[jira] [Commented] (TS-2307) Range request with If-Range does not work

2013-11-12 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13820814#comment-13820814
 ] 

jaekyung oh commented on TS-2307:
-

I thought the If-Range header doesn't have any defined format to distinguish an ETag 
from a date. But to determine whether the value of the If-Range header is an ETag or 
a date, don't we check whether it is enclosed in double quotation marks?

ex.
If-Range: "<etag value>"                --- means the value is an ETag
If-Range: Fri, 24 May 2013 01:38:13 GMT --- means the value is a date

But in this case,
originally the origin server sent the ETag value, actually meaning an ETag, not a date! 
Anyway, traffic server saved that ETag value in the cache.

Then the client sends a request for the same content with that ETag in If-Range 
(caution: the ETag was not enclosed in double quotation marks because the origin 
server sent it without quotation marks). 

Consequently traffic server regards it as a date and falls into the date-comparison 
routine.

As you can see, traffic server returns HTTP_STATUS_RANGE_NOT_SATISFIABLE even 
though the cache has that ETag value and the client asked with the very same ETag 
value.

I'm confused whether that is a bug in traffic server or a bug in the origin. 
What do you think?
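
As an aside, here is a minimal sketch of the quote-based heuristic discussed above. 
It is for illustration only, not the ATS code; the helper name is_etag_value is made up.

{code}
// Hypothetical helper sketching the heuristic discussed above: a value that
// starts with a double quote (or the weak-validator prefix W/) is treated as
// an entity tag, and anything else is assumed to be an HTTP-date.
static bool is_etag_value(const char *if_value)
{
  if (if_value == nullptr || if_value[0] == '\0') {
    return false;                 // nothing usable to compare against
  }
  if (if_value[0] == '"') {
    return true;                  // strong ETag, e.g. "xyzzy"
  }
  if (if_value[0] == 'W' && if_value[1] == '/') {
    return true;                  // weak ETag, e.g. W/"xyzzy"
  }
  // An unquoted value (e.g. an origin that sends its ETag without quotation
  // marks) falls through here and gets treated as a date, which is exactly
  // the ambiguity raised in this comment.
  return false;
}
{code}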

 Range request with If-Range does not work
 -

 Key: TS-2307
 URL: https://issues.apache.org/jira/browse/TS-2307
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.2.5, 4.0.1, 4.0.2
Reporter: Jungwoo Lee
  Labels: A
 Fix For: 4.2.0


 1. Precondition
  - Upload file such as video or music file on Origin server
  - On Chrome, access to the content file
  - Repeat the following
 -- Delete the cache of Chrome
 -- Refresh( press F5 )
 2. Result
  - Chrome does not play the content.
 3. Cause
  - When Chrome sends a request including Range and If-Range headers, the value of 
 the If-Range header can be set to either an ETag or the Last-Modified date. The ATS 
 core has an unreasonable condition for checking whether the value of If-Range is an 
 ETag, and it causes a bug: the value of If-Range is compared with the Last-Modified 
 date even if an ETag is set as the value of If-Range.
 As a result, the response header does not include Content-Range when the value of 
 If-Range is an ETag. Sometimes this makes the client abort.
  - The condition that checks for an ETag is the following (in the 
 HttpTransactCache::match_response_to_request_conditionals(HTTPHdr * request, 
 HTTPHdr * response) function)
-- if (!if_value || if_value[0] == '"' || (comma_sep_list_len > 1 && 
 if_value[1] == '/'))
--- when the ETag doesn't start and end with a double quote ("), this condition fails.
-- if_value points to the string value of If-Range
 4. Expected Behaviour
  - Video and music files should play every time, in every case.
   -- When the value of If-Range is an ETag and it matches the ETag header of the 
 cached content, the response should include the headers related to the range 
 request (see the sketch below).
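
As an aside, here is a minimal sketch of the behaviour that item 4 asks for, under the 
assumption that anything that does not parse as an RFC 1123 HTTP-date is treated as an 
entity tag. The helper names are hypothetical; this is not the actual fix for TS-2307.

{code}
#include <ctime>
#include <iomanip>
#include <locale>
#include <sstream>
#include <string>

// Hypothetical helper: true if the If-Range value parses as an HTTP-date,
// e.g. "Fri, 24 May 2013 01:38:13 GMT".
static bool is_http_date(const std::string &value)
{
  std::tm tm = {};
  std::istringstream in(value);
  in.imbue(std::locale::classic());
  in >> std::get_time(&tm, "%a, %d %b %Y %H:%M:%S GMT");
  return !in.fail();
}

// Hypothetical classification: a date only when it really parses as a date,
// otherwise an entity tag, whether or not it is enclosed in double quotes.
static bool if_range_value_is_etag(const std::string &value)
{
  return !value.empty() && !is_http_date(value);
}
{code}

With this ordering, the unquoted ETag case described in the comment above would still 
be matched against the cached ETag instead of falling into the date-comparison routine.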



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2262) range request for cached content with small size(around 2k bytes) fails.

2013-10-05 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13787452#comment-13787452
 ] 

jaekyung oh commented on TS-2262:
-

I tried the same test with ats 4.0.1 and it works fine.

Now I can see we'd better move from ats 3.2.4 to ats 4.0.1.

The problem is...
1. we (especially the op team) are not familiar with ats 4.0.1 because some 
configuration styles were changed and they are confused. So the op team doesn't feel 
safe moving to ats 4.0.1 until they get familiar with it.
2. we should fix it on ats 3.2.4 right now because one of our customers is 
complaining about it.

Could I get a simple patch for that? Or is moving to ats 4.0.1 the only way 
to fix our urgent problem?

thank you Leif. 

 range request for cached content with small size(around 2k bytes) fails.
 

 Key: TS-2262
 URL: https://issues.apache.org/jira/browse/TS-2262
 Project: Traffic Server
  Issue Type: Bug
Reporter: jaekyung oh

 after caching a content of about 2k bytes, a range request for it fails, showing a 
 timeout.
 Version : ATS 3.2.4
 curl -v -o /dev/null --range 100-200 http://ats-test.test.net/1-test.2k 
 shows
 * About to connect() to ats-test.test.net port 80 (#0)
 *   Trying 110.45.197.30...   % Total% Received % Xferd  Average Speed   
 TimeTime Time  Current
  Dload  Upload   Total   SpentLeft  Speed
   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
 0connected
 * Connected to ats-test.test.net (xxx.xxx.xxx.xxx) port 80 (#0)
  GET ats-test.test.net/1-test.2k HTTP/1.1
  Range: bytes=100-200
  User-Agent: curl/7.20.1 (x86_64-unknown-linux-gnu) libcurl/7.20.1 
  OpenSSL/1.0.0 zlib/1.2.5 libidn/1.15 libssh2/1.2.2_DEV
  Host: ats-test.test.net
  Accept: */*
  
  HTTP/1.1 206 Partial Content
  Accept-Ranges: bytes
  ETag: 2429143783
  Last-Modified: Mon, 22 Apr 2013 07:46:30 GMT
  Date: Tue, 01 Oct 2013 09:15:00 GMT
  Server: ATS/3.2.4.3.0
  Content-Type: multipart/byteranges; boundary=RANGE_SEPARATOR
  Content-Length: 1000
  Age: 172
  Connection: keep-alive
  
 { [data not shown]
   0  10000 10 0  0  0 --:--:--  0:00:30 --:--:-- 
 0* transfer closed with 999 bytes remaining to read
   0  10000 10 0  0  0 --:--:--  0:00:30 --:--:-- 
 0* Closing connection #0
 curl: (18) transfer closed with 999 bytes remaining to read
 is there any limit on the size of contents for range request processing?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (TS-2262) range request for cached content with small size(around 2k bytes) fails.

2013-10-01 Thread jaekyung oh (JIRA)
jaekyung oh created TS-2262:
---

 Summary: range request for cached content with small size(around 
2k bytes) fails.
 Key: TS-2262
 URL: https://issues.apache.org/jira/browse/TS-2262
 Project: Traffic Server
  Issue Type: Bug
Reporter: jaekyung oh


after caching a content of about 2k bytes, a range request for it fails, showing a 
timeout.

Version : ATS 3.2.4

curl -v -o /dev/null --range 100-200 http://ats-test.test.net/1-test.2k shows

* About to connect() to ats-test.test.net port 80 (#0)
*   Trying 110.45.197.30...   % Total% Received % Xferd  Average Speed   
TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
0connected
* Connected to ats-test.test.net (xxx.xxx.xxx.xxx) port 80 (#0)
 GET ats-test.test.net/1-test.2k HTTP/1.1
 Range: bytes=100-200
 User-Agent: curl/7.20.1 (x86_64-unknown-linux-gnu) libcurl/7.20.1 
 OpenSSL/1.0.0 zlib/1.2.5 libidn/1.15 libssh2/1.2.2_DEV
 Host: ats-test.test.net
 Accept: */*
 
 HTTP/1.1 206 Partial Content
 Accept-Ranges: bytes
 ETag: 2429143783
 Last-Modified: Mon, 22 Apr 2013 07:46:30 GMT
 Date: Tue, 01 Oct 2013 09:15:00 GMT
 Server: ATS/3.2.4.3.0
 Content-Type: multipart/byteranges; boundary=RANGE_SEPARATOR
 Content-Length: 1000
 Age: 172
 Connection: keep-alive
 
{ [data not shown]
  0  10000 10 0  0  0 --:--:--  0:00:30 --:--:-- 0* 
transfer closed with 999 bytes remaining to read
  0  10000 10 0  0  0 --:--:--  0:00:30 --:--:-- 0* 
Closing connection #0

curl: (18) transfer closed with 999 bytes remaining to read

is there any limit on the size of contents for range request processing?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-1949) a range request cause crash.

2013-09-08 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761597#comment-13761597
 ] 

jaekyung oh commented on TS-1949:
-

The 2nd range request doesn't make traffic server crash; instead it is 
pending until the download is completed.

Anyway, no crash happens.

Thanks.


 a range request cause crash.
 

 Key: TS-1949
 URL: https://issues.apache.org/jira/browse/TS-1949
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: jaekyung oh
Assignee: Alan M. Carroll
 Fix For: 3.2.6


 on ats 3.2.4, when read_while_writer is 1,
 a range request for content that is not cached is ok, but the same request causes a 
 problem while the content is still being downloaded...
 To make crash --
 1. First try to get a big content.
 wget -O /dev/null  http://www.test.com/mp4/aa.mp4
 9% [===  
  ] 29,068,852  54.2M/s
 2. before the first request gets done, send a range request for the last part 
 of the content.
 curl --range 3-301046986  http://www.test.com/mp4/aa.mp4 2>/dev/null 1>/dev/null
 traffic.out shows 
 + Incoming Request +
 -- State Machine Id: 2
 GET http://origin.test.com/mp4/aa.mp4 HTTP/1.1
 Range: bytes=3-301046986
 User-Agent: curl/7.21.2 (x86_64-unknown-linux-gnu) libcurl/7.21.2 
 OpenSSL/1.0.0c zlib/1.2.5 libidn/1.15 libssh2/1.2.7
 Host: www.test.com
 Accept: */*
 + Header To Transform +
 -- State Machine Id: 2
 HTTP/1.1 200 OK
 Content-Type: video/mp4
 Accept-Ranges: bytes
 ETag: 314612005
 Last-Modified: Wed, 13 Feb 2013 12:37:21 GMT
 Date: Mon, 10 Jun 2013 08:08:14 GMT
 Server: lighttpd/1.4.28
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/ats324/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0(+0xfd00)[0x2ba8e23c7d00]
 /usr/local/ats324/bin/traffic_server(_ZN7CacheVC12openReadMainEiP5Event+0x93)[0x659493]
 /usr/local/ats324/bin/traffic_server[0x658abf]
 /usr/local/ats324/bin/traffic_server(_ZN7CacheVC21openReadStartEarliestEiP5Event+0x6fa)[0x65cf4a]
 /usr/local/ats324/bin/traffic_server(_ZN7CacheVC14handleReadDoneEiP5Event+0x1c2)[0x6376d2]
 /usr/local/ats324/bin/traffic_server(_ZN19AIOCallbackInternal11io_completeEiPv+0x3d)[0x63804d]
 /usr/local/ats324/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x90)[0x6b2ed0]
 /usr/local/ats324/bin/traffic_server(_ZN7EThread7executeEv+0x5eb)[0x6b3a8b]
 /usr/local/ats324/bin/traffic_server[0x6b1cc2]
 /lib64/libpthread.so.0(+0x7f05)[0x2ba8e23bff05]
 /lib64/libc.so.6(clone+0x6d)[0x2ba8e461010d]
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL: 
 [LocalManager::pollMgmtProcessServer] Error in read (errno: 104)
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL:  (last system error 
 104: Connection reset by peer)
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
 [LocalManager::mgmtShutdown] Executing shutdown request.
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
 [LocalManager::processShutdown] Executing process shutdown request.
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR: 
 [LocalManager::sendMgmtMsgToProcesses] Error writing message
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR:  (last system error 32: 
 Broken pipe)
 [E. Mgmt] log == [TrafficManager] using root directory '/usr/local/ats324'
 [Jun 10 17:08:19.871] {0x7f567e260720} STATUS: opened 
 /usr/local/ats324/var/log/trafficserver/manager.log

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (TS-1949) a range request cause crash.

2013-09-04 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13758615#comment-13758615
 ] 

jaekyung oh commented on TS-1949:
-

i have another job now so i'll try it next week and report here.

 a range request cause crash.
 

 Key: TS-1949
 URL: https://issues.apache.org/jira/browse/TS-1949
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: jaekyung oh
Assignee: Alan M. Carroll
 Fix For: 3.2.6


 on ats 3.2.4, when read_while_writer is 1,
 a range request for content that is not cached is ok, but the same request causes a 
 problem while the content is still being downloaded...
 To make crash --
 1. First try to get a big content.
 wget -O /dev/null  http://www.test.com/mp4/aa.mp4
 9% [===  
  ] 29,068,852  54.2M/s
 2. before the first request gets done, send a range request for the last part 
 of the content.
 curl --range 3-301046986  http://www.test.com/mp4/aa.mp4 2>/dev/null 1>/dev/null
 traffic.out shows 
 + Incoming Request +
 -- State Machine Id: 2
 GET http://origin.test.com/mp4/aa.mp4 HTTP/1.1
 Range: bytes=3-301046986
 User-Agent: curl/7.21.2 (x86_64-unknown-linux-gnu) libcurl/7.21.2 
 OpenSSL/1.0.0c zlib/1.2.5 libidn/1.15 libssh2/1.2.7
 Host: www.test.com
 Accept: */*
 + Header To Transform +
 -- State Machine Id: 2
 HTTP/1.1 200 OK
 Content-Type: video/mp4
 Accept-Ranges: bytes
 ETag: 314612005
 Last-Modified: Wed, 13 Feb 2013 12:37:21 GMT
 Date: Mon, 10 Jun 2013 08:08:14 GMT
 Server: lighttpd/1.4.28
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/ats324/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0(+0xfd00)[0x2ba8e23c7d00]
 /usr/local/ats324/bin/traffic_server(_ZN7CacheVC12openReadMainEiP5Event+0x93)[0x659493]
 /usr/local/ats324/bin/traffic_server[0x658abf]
 /usr/local/ats324/bin/traffic_server(_ZN7CacheVC21openReadStartEarliestEiP5Event+0x6fa)[0x65cf4a]
 /usr/local/ats324/bin/traffic_server(_ZN7CacheVC14handleReadDoneEiP5Event+0x1c2)[0x6376d2]
 /usr/local/ats324/bin/traffic_server(_ZN19AIOCallbackInternal11io_completeEiPv+0x3d)[0x63804d]
 /usr/local/ats324/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x90)[0x6b2ed0]
 /usr/local/ats324/bin/traffic_server(_ZN7EThread7executeEv+0x5eb)[0x6b3a8b]
 /usr/local/ats324/bin/traffic_server[0x6b1cc2]
 /lib64/libpthread.so.0(+0x7f05)[0x2ba8e23bff05]
 /lib64/libc.so.6(clone+0x6d)[0x2ba8e461010d]
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL: 
 [LocalManager::pollMgmtProcessServer] Error in read (errno: 104)
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL:  (last system error 
 104: Connection reset by peer)
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
 [LocalManager::mgmtShutdown] Executing shutdown request.
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
 [LocalManager::processShutdown] Executing process shutdown request.
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR: 
 [LocalManager::sendMgmtMsgToProcesses] Error writing message
 [Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR:  (last system error 32: 
 Broken pipe)
 [E. Mgmt] log == [TrafficManager] using root directory '/usr/local/ats324'
 [Jun 10 17:08:19.871] {0x7f567e260720} STATUS: opened 
 /usr/local/ats324/var/log/trafficserver/manager.log

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (TS-1955) read_while_writer sends wrong response for range request.

2013-06-14 Thread jaekyung oh (JIRA)
jaekyung oh created TS-1955:
---

 Summary: read_while_writer sends wrong response for range request.
 Key: TS-1955
 URL: https://issues.apache.org/jira/browse/TS-1955
 Project: Traffic Server
  Issue Type: Bug
Reporter: jaekyung oh


Basically read_while_writer works fine when ATS handles a normal file.

In progressive download and playback of an mp4 whose moov atom is placed at the 
end of the file, ATS makes and returns a wrong response for a range request served 
from a not-yet-complete cache object when read_while_writer is 1.

On the origin, apache has the h264 streaming module. Everything is ok whether the moov 
atom is placed at the beginning of the file or not on the origin, except when a range 
request happens with read_while_writer.

Most of our customers’ content places the moov atom at the end of the file, and in 
that case the movie player stops playing when it seeks somewhere in the movie.

to check if read_while_writer works fine,
1. prepare an mp4 file whose moov atom is placed at the end of the file.
2. curl --range - http://www.test.com/mp4/test.mp4 1> no_cache_from_origin
3. wget http://www.test.com/mp4/test.mp4
4. right after wget, execute “curl --range - 
http://www.test.com/mp4/test.mp4 1> from_read_while_writer” on another terminal
(the point is to send the range request while ATS is still downloading)
5. after wget gets done, curl --range - 
http://www.test.com/mp4/test.mp4 1> from_cache
6. you can compare those files with bindiff.

The response from the origin (no_cache_from_origin) for the range request is exactly 
the same as from_cache, which resulted from #5's range request. But from_read_while_writer 
from #4 is totally different from the others.

I think a range request should be forwarded to the origin server if ATS can’t find 
the content at the requested offset in cache, even if read_while_writer is on; 
instead ATS makes (from where?) and sends a wrong response. (In squid.log it 
indicates TCP_HIT)

That’s why a movie player stops when it seeks right after the movie starts.

Well, we turned off read_while_writer and movie playback is ok, but the problem is that 
read_while_writer is a global option. We can’t set it differently for each remap 
entry via conf_remap.

So the downloading of big files (not mp4 files) gives overhead to the origin server 
because read_while_writer is off.
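
A minimal sketch of the forwarding decision suggested above, assuming a hypothetical 
bytes_written_so_far value for how many contiguous bytes of the object the cache writer 
has stored so far; range_is_available is a made-up name, not an ATS API.

{code}
#include <cstdint>

// Hypothetical sketch of the expected behaviour: answer a range request from a
// partially written cache object only when every requested byte is already in
// cache; otherwise the request should be forwarded to the origin server.
static bool range_is_available(int64_t range_start, int64_t range_end,
                               int64_t bytes_written_so_far)
{
  return range_start >= 0 && range_end >= range_start &&
         range_end < bytes_written_so_far;
}

// With read_while_writer on, a seek to the moov atom at the end of a file that
// is still downloading would fail this check and should go to the origin,
// instead of being served a made-up response from the incomplete cache object.
{code}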

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (TS-1955) read_while_writer sends wrong response for range request.

2013-06-14 Thread jaekyung oh (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaekyung oh updated TS-1955:


  Component/s: Core
Affects Version/s: 3.2.4

 read_while_writer sends wrong response for range request.
 -

 Key: TS-1955
 URL: https://issues.apache.org/jira/browse/TS-1955
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Affects Versions: 3.2.4
Reporter: jaekyung oh

 Basically read_while_writer works fine when ATS handles a normal file.
 In progressive download and playback of an mp4 whose moov atom is placed at 
 the end of the file, ATS makes and returns a wrong response for a range request 
 served from a not-yet-complete cache object when read_while_writer is 1.
 On the origin, apache has the h264 streaming module. Everything is ok whether the 
 moov atom is placed at the beginning of the file or not on the origin, except when a 
 range request happens with read_while_writer.
 Most of our customers’ content places the moov atom at the end of the file, and in 
 that case the movie player stops playing when it seeks somewhere in the movie.
 to check if read_while_writer works fine,
 1. prepare an mp4 file whose moov atom is placed at the end of the file.
 2. curl --range - http://www.test.com/mp4/test.mp4 1> no_cache_from_origin
 3. wget http://www.test.com/mp4/test.mp4
 4. right after wget, execute “curl --range - 
 http://www.test.com/mp4/test.mp4 1> from_read_while_writer” on another terminal
 (the point is to send the range request while ATS is still downloading)
 5. after wget gets done, curl --range - 
 http://www.test.com/mp4/test.mp4 1> from_cache
 6. you can compare those files with bindiff.
 The response from the origin (no_cache_from_origin) for the range request is 
 exactly the same as from_cache, which resulted from #5's range request. But 
 from_read_while_writer from #4 is totally different from the others.
 I think a range request should be forwarded to the origin server if ATS can’t find 
 the content at the requested offset in cache, even if read_while_writer is on; 
 instead ATS makes (from where?) and sends a wrong response. (In squid.log it 
 indicates TCP_HIT)
 That’s why a movie player stops when it seeks right after the movie starts.
 Well, we turned off read_while_writer and movie playback is ok, but the problem 
 is that read_while_writer is a global option. We can’t set it differently for each 
 remap entry via conf_remap.
 So the downloading of big files (not mp4 files) gives overhead to the origin server 
 because read_while_writer is off.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (TS-1949) a range request cause crash.

2013-06-10 Thread jaekyung oh (JIRA)
jaekyung oh created TS-1949:
---

 Summary: a range request cause crash.
 Key: TS-1949
 URL: https://issues.apache.org/jira/browse/TS-1949
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: jaekyung oh


on ats 3.2.4, when read_while_writer is 1,

a range request for content that is not cached is ok, but the same request causes a 
problem while the content is still being downloaded...

To make crash --
1. First try to get a big content.
wget -O /dev/null  http://www.test.com/mp4/aa.mp4
9% [===
   ] 29,068,852  54.2M/s

2. before the first request gets done, send a range request for the last part 
of the content.
curl --range 3-301046986  http://www.test.com/mp4/aa.mp4 2>/dev/null 1>/dev/null


traffic.out shows 

+ Incoming Request +
-- State Machine Id: 2
GET http://origin.test.com/mp4/aa.mp4 HTTP/1.1
Range: bytes=3-301046986
User-Agent: curl/7.21.2 (x86_64-unknown-linux-gnu) libcurl/7.21.2 
OpenSSL/1.0.0c zlib/1.2.5 libidn/1.15 libssh2/1.2.7
Host: www.test.com
Accept: */*

+ Header To Transform +
-- State Machine Id: 2
HTTP/1.1 200 OK
Content-Type: video/mp4
Accept-Ranges: bytes
ETag: 314612005
Last-Modified: Wed, 13 Feb 2013 12:37:21 GMT
Date: Mon, 10 Jun 2013 08:08:14 GMT
Server: lighttpd/1.4.28

NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/ats324/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0(+0xfd00)[0x2ba8e23c7d00]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC12openReadMainEiP5Event+0x93)[0x659493]
/usr/local/ats324/bin/traffic_server[0x658abf]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC21openReadStartEarliestEiP5Event+0x6fa)[0x65cf4a]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC14handleReadDoneEiP5Event+0x1c2)[0x6376d2]
/usr/local/ats324/bin/traffic_server(_ZN19AIOCallbackInternal11io_completeEiPv+0x3d)[0x63804d]
/usr/local/ats324/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x90)[0x6b2ed0]
/usr/local/ats324/bin/traffic_server(_ZN7EThread7executeEv+0x5eb)[0x6b3a8b]
/usr/local/ats324/bin/traffic_server[0x6b1cc2]
/lib64/libpthread.so.0(+0x7f05)[0x2ba8e23bff05]
/lib64/libc.so.6(clone+0x6d)[0x2ba8e461010d]
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL: 
[LocalManager::pollMgmtProcessServer] Error in read (errno: 104)
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL:  (last system error 104: 
Connection reset by peer)
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
[LocalManager::mgmtShutdown] Executing shutdown request.
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
[LocalManager::processShutdown] Executing process shutdown request.
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR: 
[LocalManager::sendMgmtMsgToProcesses] Error writing message
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR:  (last system error 32: 
Broken pipe)
[E. Mgmt] log == [TrafficManager] using root directory '/usr/local/ats324'
[Jun 10 17:08:19.871] {0x7f567e260720} STATUS: opened 
/usr/local/ats324/var/log/trafficserver/manager.log

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (TS-1949) a range request cause crash.

2013-06-10 Thread jaekyung oh (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaekyung oh updated TS-1949:


Description: 
on ats 3.2.4, when read_while_writer is 1,

a range request for content that is not cached is ok, but the same request causes a 
problem while the content is still being downloaded...

To make crash --
1. First try to get a big content.
wget -O /dev/null  http://www.test.com/mp4/aa.mp4
9% [===
   ] 29,068,852  54.2M/s

2. before the first request gets done, send a range request for the last part 
of the content.
curl --range 3-301046986  http://www.test.com/mp4/aa.mp4 2>/dev/null 1>/dev/null


traffic.out shows 

+ Incoming Request +
-- State Machine Id: 2
GET http://origin.test.com/mp4/aa.mp4 HTTP/1.1
Range: bytes=3-301046986
User-Agent: curl/7.21.2 (x86_64-unknown-linux-gnu) libcurl/7.21.2 
OpenSSL/1.0.0c zlib/1.2.5 libidn/1.15 libssh2/1.2.7
Host: www.test.com
Accept: */*

+ Header To Transform +
-- State Machine Id: 2
HTTP/1.1 200 OK
Content-Type: video/mp4
Accept-Ranges: bytes
ETag: 314612005
Last-Modified: Wed, 13 Feb 2013 12:37:21 GMT
Date: Mon, 10 Jun 2013 08:08:14 GMT
Server: lighttpd/1.4.28

NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/ats324/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0(+0xfd00)[0x2ba8e23c7d00]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC12openReadMainEiP5Event+0x93)[0x659493]
/usr/local/ats324/bin/traffic_server[0x658abf]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC21openReadStartEarliestEiP5Event+0x6fa)[0x65cf4a]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC14handleReadDoneEiP5Event+0x1c2)[0x6376d2]
/usr/local/ats324/bin/traffic_server(_ZN19AIOCallbackInternal11io_completeEiPv+0x3d)[0x63804d]
/usr/local/ats324/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x90)[0x6b2ed0]
/usr/local/ats324/bin/traffic_server(_ZN7EThread7executeEv+0x5eb)[0x6b3a8b]
/usr/local/ats324/bin/traffic_server[0x6b1cc2]
/lib64/libpthread.so.0(+0x7f05)[0x2ba8e23bff05]
/lib64/libc.so.6(clone+0x6d)[0x2ba8e461010d]
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL: 
[LocalManager::pollMgmtProcessServer] Error in read (errno: 104)
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} FATAL:  (last system error 104: 
Connection reset by peer)
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
[LocalManager::mgmtShutdown] Executing shutdown request.
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} NOTE: 
[LocalManager::processShutdown] Executing process shutdown request.
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR: 
[LocalManager::sendMgmtMsgToProcesses] Error writing message
[Jun 10 17:08:19.846] Manager {0x7fa88b6a0720} ERROR:  (last system error 32: 
Broken pipe)
[E. Mgmt] log == [TrafficManager] using root directory '/usr/local/ats324'
[Jun 10 17:08:19.871] {0x7f567e260720} STATUS: opened 
/usr/local/ats324/var/log/trafficserver/manager.log

  was:
on ats 3.2.4, when read_while_writer is 1,

a range request for content that is not cached is ok, but the same request causes a 
problem while the content is still being downloaded...

To make crash --
1. First try to get a big content.
wget -O /dev/null  http://www.test.com/mp4/aa.mp4
9% [===
   ] 29,068,852  54.2M/s

2. before the first request gets done, send a range request for the last part 
of the content.
curl --range 3-301046986  http://www.test.com/mp4/aa.mp4 2>/dev/null 1>/dev/null


traffic.out shows 

+ Incoming Request +
-- State Machine Id: 2
GET http://origin.test.com/mp4/aa.mp4 HTTP/1.1
Range: bytes=3-301046986
User-Agent: curl/7.21.2 (x86_64-unknown-linux-gnu) libcurl/7.21.2 
OpenSSL/1.0.0c zlib/1.2.5 libidn/1.15 libssh2/1.2.7
Host: www.test.com
Accept: */*

+ Header To Transform +
-- State Machine Id: 2
HTTP/1.1 200 OK
Content-Type: video/mp4
Accept-Ranges: bytes
ETag: 314612005
Last-Modified: Wed, 13 Feb 2013 12:37:21 GMT
Date: Mon, 10 Jun 2013 08:08:14 GMT
Server: lighttpd/1.4.28

NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/ats324/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0(+0xfd00)[0x2ba8e23c7d00]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC12openReadMainEiP5Event+0x93)[0x659493]
/usr/local/ats324/bin/traffic_server[0x658abf]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC21openReadStartEarliestEiP5Event+0x6fa)[0x65cf4a]
/usr/local/ats324/bin/traffic_server(_ZN7CacheVC14handleReadDoneEiP5Event+0x1c2)[0x6376d2]
/usr/local/ats324/bin/traffic_server(_ZN19AIOCallbackInternal11io_completeEiPv+0x3d)[0x63804d]
/usr/local/ats324/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x90)[0x6b2ed0]
/usr/local/ats324/bin/traffic_server(_ZN7EThread7executeEv+0x5eb)[0x6b3a8b]

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-04-01 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13618711#comment-13618711
 ] 

jaekyung oh commented on TS-1006:
-

Hi. Yunkai Zhang.

I tried again with ./configure --enable-reclaimable-freelist

but the same thing happens...

this is the part of traffic.out from when I executed traffic_line -x


 360192 | 360192 |672 | memory/netVCAllocator
  0 |  0 |120 | 
memory/udpReadContAllocator
  0 |  0 |176 | 
memory/udpPacketAllocator
  0 |  0 |384 | memory/socksAllocator
  0 |  0 |128 | 
memory/UDPIOEventAllocator
  16256 |  16256 | 64 | memory/ioBlockAllocator
   8064 |   8064 | 48 | memory/ioDataAllocator
  32480 |  32480 |232 | memory/ioAllocator
 109512 | 109512 | 72 | memory/mutexAllocator
 318032 | 318032 | 88 | memory/eventAllocator
 268288 | 268288 |   1024 | memory/ArenaBlock
[Apr  1 18:18:35.010] Manager {0x7f25beffd700} NOTE: User has changed config 
file records.config
[Apr  1 18:18:36.678] Server {0x2b989b06b700} NOTE: cache enabled
[2b989b06b700:01][ink_queue_ext.cc:00577][F]  13.28M t:278 f:274  m:0
avg:0.0M:4csbase:256  csize:278  tsize:88 cbsize:24576
[2b989b06b700:01][ink_queue_ext.cc:00584][-]  13.28M t:278 f:274  m:0
avg:0.0M:4csbase:256  csize:278  tsize:88 cbsize:24576
[2b989b06b700:01][ink_queue_ext.cc:00577][F]  13.32M t:278 f:274  m:0
avg:82.2   M:4csbase:256  csize:278  tsize:88 cbsize:24576
[2b989b06b700:01][ink_queue_ext.cc:00584][-]  13.32M t:196 f:192  m:0
avg:82.2   M:4csbase:256  csize:278  tsize:88 cbsize:24576
NOTE: Traffic Server received Sig 11: Segmentation fault
 what happened after logging?
/usr/local/nts/bin/traffic_server - STACK TRACE:
/lib64/libpthread.so.0(+0xf2d0)[0x2b9897f1f2d0]
[0x2b989b274f10]
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR: 
[LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 11: 
Segmentation fault
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR:  (last system error 2: No 
such file or directory)
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR: [Alarms::signalAlarm] 
Server Process was reset
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR:  (last system error 2: No 
such file or directory)
[Apr  1 18:18:40.803] Manager {0x7f25c7ec0720} NOTE: [LocalManager::startProxy] 
Launching ts process
[TrafficServer] using root directory '/usr/local/nts'
[Apr  1 18:18:40.813] Manager {0x7f25c7ec0720} NOTE: 
[LocalManager::pollMgmtProcessServer] New process connecting fd '10'
[Apr  1 18:18:40.813] Manager {0x7f25c7ec0720} NOTE: [Alarms::signalAlarm] 
Server Process born
.



 memory management, cut down memory waste ?
 --

 Key: TS-1006
 URL: https://issues.apache.org/jira/browse/TS-1006
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.1.1
Reporter: Zhao Yongming
Assignee: Bin Chen
 Fix For: 3.3.3

 Attachments: 
 0001-TS-1006-Add-an-enable-reclaimable-freelist-option.patch, 
 0002-TS-1006-Add-a-new-wrapper-ink_atomic_decrement.patch, 
 0003-TS-1006-Introduce-a-reclaimable-InkFreeList.patch, 
 0004-TS-1006-Make-InkFreeList-memory-pool-configurable.patch, 
 Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods


 when we review the memory usage in production, there is something 
 abnormal, i.e., it looks like TS takes much more memory than index data + common 
 system waste, and here is some memory dump result obtained by setting 
 proxy.config.dump_mem_info_frequency to 1; the one below is on a not-so-busy 
 forwarding system:
 physical memory: 32G
 RAM cache: 22G
 DISK: 6140 GB
 average_object_size 64000
 {code}
  allocated  |in-use  | type size  |   free list name
 |||--
   671088640 |   37748736 |2097152 | 
 memory/ioBufAllocator[14]
  2248146944 | 2135949312 |1048576 | 
 memory/ioBufAllocator[13]
  1711276032 | 1705508864 | 524288 | 
 memory/ioBufAllocator[12]
  1669332992 | 1667760128 | 262144 | 
 memory/ioBufAllocator[11]
  2214592512 | 221184 | 131072 | 
 memory/ioBufAllocator[10]
  2325741568 | 2323775488 |  65536 | 
 memory/ioBufAllocator[9]
  2091909120 | 

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-03-27 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614983#comment-13614983
 ] 

jaekyung oh commented on TS-1006:
-

yes. with the last final patch I always pass --enable-reclaimable-freelist=yes

the same error happens on both 3.2.0 and 3.2.4

 memory management, cut down memory waste ?
 --

 Key: TS-1006
 URL: https://issues.apache.org/jira/browse/TS-1006
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.1.1
Reporter: Zhao Yongming
Assignee: Bin Chen
 Fix For: 3.3.3

 Attachments: 
 0001-TS-1006-Add-an-enable-reclaimable-freelist-option.patch, 
 0002-TS-1006-Add-a-new-wrapper-ink_atomic_decrement.patch, 
 0003-TS-1006-Introduce-a-reclaimable-InkFreeList.patch, 
 0004-TS-1006-Make-InkFreeList-memory-pool-configurable.patch, 
 Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods


 when we review the memory usage in production, there is something 
 abnormal, i.e., it looks like TS takes much more memory than index data + common 
 system waste, and here is some memory dump result obtained by setting 
 proxy.config.dump_mem_info_frequency to 1; the one below is on a not-so-busy 
 forwarding system:
 physical memory: 32G
 RAM cache: 22G
 DISK: 6140 GB
 average_object_size 64000
 {code}
  allocated  |in-use  | type size  |   free list name
 |||--
   671088640 |   37748736 |2097152 | 
 memory/ioBufAllocator[14]
  2248146944 | 2135949312 |1048576 | 
 memory/ioBufAllocator[13]
  1711276032 | 1705508864 | 524288 | 
 memory/ioBufAllocator[12]
  1669332992 | 1667760128 | 262144 | 
 memory/ioBufAllocator[11]
  2214592512 | 221184 | 131072 | 
 memory/ioBufAllocator[10]
  2325741568 | 2323775488 |  65536 | 
 memory/ioBufAllocator[9]
  2091909120 | 2089123840 |  32768 | 
 memory/ioBufAllocator[8]
  1956642816 | 1956478976 |  16384 | 
 memory/ioBufAllocator[7]
  2094530560 | 2094071808 |   8192 | 
 memory/ioBufAllocator[6]
   356515840 |  355540992 |   4096 | 
 memory/ioBufAllocator[5]
 1048576 |  14336 |   2048 | 
 memory/ioBufAllocator[4]
  131072 |  0 |   1024 | 
 memory/ioBufAllocator[3]
   65536 |  0 |512 | 
 memory/ioBufAllocator[2]
   32768 |  0 |256 | 
 memory/ioBufAllocator[1]
   16384 |  0 |128 | 
 memory/ioBufAllocator[0]
   0 |  0 |576 | 
 memory/ICPRequestCont_allocator
   0 |  0 |112 | 
 memory/ICPPeerReadContAllocator
   0 |  0 |432 | 
 memory/PeerReadDataAllocator
   0 |  0 | 32 | 
 memory/MIMEFieldSDKHandle
   0 |  0 |240 | 
 memory/INKVConnAllocator
   0 |  0 | 96 | 
 memory/INKContAllocator
4096 |  0 | 32 | 
 memory/apiHookAllocator
   0 |  0 |288 | 
 memory/FetchSMAllocator
   0 |  0 | 80 | 
 memory/prefetchLockHandlerAllocator
   0 |  0 |176 | 
 memory/PrefetchBlasterAllocator
   0 |  0 | 80 | 
 memory/prefetchUrlBlaster
   0 |  0 | 96 | memory/blasterUrlList
   0 |  0 | 96 | 
 memory/prefetchUrlEntryAllocator
   0 |  0 |128 | 
 memory/socksProxyAllocator
   0 |  0 |144 | 
 memory/ObjectReloadCont
 3258368 | 576016 |592 | 
 memory/httpClientSessionAllocator
  825344 | 139568 |208 | 
 memory/httpServerSessionAllocator
22597632 |1284848 |   9808 | memory/httpSMAllocator
   0 |  0 | 32 | 
 memory/CacheLookupHttpConfigAllocator
   0 |  0 |   9856 | 
 memory/httpUpdateSMAllocator
   0 |  0 |128 | 
 memory/RemapPluginsAlloc
   0 |  0 | 48 | 
 memory/CongestRequestParamAllocator
   0 |  0 |128 | 
 memory/CongestionDBContAllocator
 5767168 | 

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-03-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614856#comment-13614856
 ] 

jaekyung oh commented on TS-1006:
-

I've applied the final patch on 3.2.0 and 3.2.4. It seems fine without debug 
options.

but when I tried turning on CONFIG proxy.config.allocator.debug_filter INT (0 -> 1), 
no debug message was printed.

so I tried to turn on CONFIG proxy.config.dump_mem_info_frequency INT (0 -> 1), and 
memory dump messages were printed. 

but if I repeat it 3 times (0 -> 1 -> traffic_line -x -> 1 -> 0 -> traffic_line 
-x is one time) traffic server stops, showing these messages.

[Mar 26 22:16:56.218] Manager {0x7f1d6effd700} NOTE: User has changed config 
file records.config
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE:
 /lib64/libpthread.so.0(+0xf2d0)[0x2b59c7a3a2d0]
 [0x2b59cc49d3e0]
 [Mar 26 22:17:02.986] Manager {0x7f1d77a93720} ERROR: 
[LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 11: 
Segmentation fault

[Mar 26 22:30:10.508] Manager {0x7f7d57fff700} NOTE: User has changed config 
file records.config
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE:
 /lib64/libpthread.so.0(+0xfd00)[0x2b3998975d00]
 
/usr/local/bin/traffic_server(_Z12init_trackerPKc8RecDataT7RecDataPv+0x31)[0x4eb6f1]
 /usr/local/bin/traffic_server(_Z22RecExecConfigUpdateCbsv+0x79)[0x6a9c99]
 
/usr/local/bin/traffic_server(_ZN18config_update_cont14exec_callbacksEiP5Event+0x26)[0x6ab976]
 /usr/local/bin/traffic_server(_ZN7EThread7executeEv+0xb93)[0x6b58e3]
 /usr/local/bin/traffic_server[0x6b3572]
 /lib64/libpthread.so.0(+0x7f05)[0x2b399896df05]
 /lib64/libc.so.6(clone+0x6d)[0x2b399abbe10d]


 memory management, cut down memory waste ?
 --

 Key: TS-1006
 URL: https://issues.apache.org/jira/browse/TS-1006
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.1.1
Reporter: Zhao Yongming
Assignee: Bin Chen
 Fix For: 3.3.3

 Attachments: 
 0001-TS-1006-Add-an-enable-reclaimable-freelist-option.patch, 
 0002-TS-1006-Add-a-new-wrapper-ink_atomic_decrement.patch, 
 0003-TS-1006-Introduce-a-reclaimable-InkFreeList.patch, 
 0004-TS-1006-Make-InkFreeList-memory-pool-configurable.patch, 
 Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods


 when we review the memory usage in production, there is something 
 abnormal, i.e., it looks like TS takes much more memory than index data + common 
 system waste, and here is some memory dump result obtained by setting 
 proxy.config.dump_mem_info_frequency to 1; the one below is on a not-so-busy 
 forwarding system:
 physical memory: 32G
 RAM cache: 22G
 DISK: 6140 GB
 average_object_size 64000
 {code}
  allocated  |in-use  | type size  |   free list name
 |||--
   671088640 |   37748736 |2097152 | 
 memory/ioBufAllocator[14]
  2248146944 | 2135949312 |1048576 | 
 memory/ioBufAllocator[13]
  1711276032 | 1705508864 | 524288 | 
 memory/ioBufAllocator[12]
  1669332992 | 1667760128 | 262144 | 
 memory/ioBufAllocator[11]
  2214592512 | 221184 | 131072 | 
 memory/ioBufAllocator[10]
  2325741568 | 2323775488 |  65536 | 
 memory/ioBufAllocator[9]
  2091909120 | 2089123840 |  32768 | 
 memory/ioBufAllocator[8]
  1956642816 | 1956478976 |  16384 | 
 memory/ioBufAllocator[7]
  2094530560 | 2094071808 |   8192 | 
 memory/ioBufAllocator[6]
   356515840 |  355540992 |   4096 | 
 memory/ioBufAllocator[5]
 1048576 |  14336 |   2048 | 
 memory/ioBufAllocator[4]
  131072 |  0 |   1024 | 
 memory/ioBufAllocator[3]
   65536 |  0 |512 | 
 memory/ioBufAllocator[2]
   32768 |  0 |256 | 
 memory/ioBufAllocator[1]
   16384 |  0 |128 | 
 memory/ioBufAllocator[0]
   0 |  0 |576 | 
 memory/ICPRequestCont_allocator
   0 |  0 |112 | 
 memory/ICPPeerReadContAllocator
   0 |  0 |432 | 
 memory/PeerReadDataAllocator
   0 |  0 | 32 | 
 memory/MIMEFieldSDKHandle
   0 |  0 |240 | 
 memory/INKVConnAllocator
   0 |  0 | 96 | 
 memory/INKContAllocator
4096 |  0 | 32 | 
 

[jira] [Commented] (TS-1551) ssl_multicert.config not reread with traffic_line -x

2013-02-24 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13585564#comment-13585564
 ] 

jaekyung oh commented on TS-1551:
-

Hi!

First I was looking for the patch for TS-1640, and now James Peach told me to 
apply this patch.

With 3.2.0 and 3.2.4, I reviewed the patch file and some of the code is different 
from 3.2.0/3.2.4 (for example addInfoToHash).

Maybe there is a pre-patch, like TS-1550, for this patch? If so, let me know 
all the pre-patches.

If not, is it really ok if I just apply this patch to ats 3.2.0/3.2.4 even 
if addInfoToHash is different between the patch and 3.2.0/3.2.4?

I need your confirmation. thanks.

 ssl_multicert.config not reread with traffic_line -x
 

 Key: TS-1551
 URL: https://issues.apache.org/jira/browse/TS-1551
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration, SSL
Affects Versions: 3.2.0
 Environment: RHEL 6
Reporter: Ethan Lai
Assignee: James Peach
Priority: Minor
 Fix For: 3.3.1

 Attachments: ssl_multicert_reconfigure.patch


 Found that ssl_multicert.config is marked as modified, but not reread while 
 running traffic_line -x (Reread Config Files).
 Just wondering is this expected behavior or not?
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 [Oct 26 09:59:45.018] Manager {0x7f3c6723d7e0} NOTE: 
 [LocalManager::startProxy] Launching ts process
 [Oct 26 09:59:45.025] Manager {0x7f3c6723d7e0} NOTE: 
 [LocalManager::pollMgmtProcessServer] New process connecting fd '12'
 [Oct 26 09:59:45.025] Manager {0x7f3c6723d7e0} NOTE: [Alarms::signalAlarm] 
 Server Process born
 [Oct 26 09:59:46.066] Server {0x2b500a320680} DEBUG: (ssl) 
 ssl_multicert.config: /usr/local/etc/trafficserver/ssl_multicert.config
 [Oct 26 09:59:46.094] Server {0x2b500a320680} DEBUG: (ssl) mapping 
 'j1.free888.cloudns.biz' to certificate 
 /usr/local/etc/ats-cert/j1.free888.cloudns.biz-v2.pem
 [Oct 26 09:59:46.096] Server {0x2b500a320680} NOTE: logging initialized[15], 
 logging_mode = 3
 [Oct 26 09:59:46.126] Server {0x2b500a320680} NOTE: traffic server running
 $ sed -i 's/j1.free888.cloudns.biz-v2/j1.free888.cloudns.biz-v3/'  
 /usr/local/etc/trafficserver/ssl_multicert.config
 $ `trafflic_line -x`
 [Oct 26 09:59:59.954] Manager {0x7f3c5700} DEBUG: (rollback) 
 [Rollback::internalUpdate] Moving ssl_multicert.config from version 43 to 
 version 44
 [Oct 26 09:59:59.970] Manager {0x7f3c5700} NOTE: [fileUpdated] 
 ssl_multicert.config file has been modified
 [Oct 26 09:59:59.970] Manager {0x7f3c5700} NOTE: User has changed config 
 file ssl_multicert.config
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 No DEBUG: (ssl) mapping 'j1.free888.cloudns.biz' to certificate 
 /usr/local/etc/ats-cert/j1.free888.cloudns.biz-v3.pem message found.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (TS-1551) ssl_multicert.config not reread with traffic_line -x

2013-02-24 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13585577#comment-13585577
 ] 

jaekyung oh commented on TS-1551:
-

Thank you Ethan.

 ssl_multicert.config not reread with traffic_line -x
 

 Key: TS-1551
 URL: https://issues.apache.org/jira/browse/TS-1551
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration, SSL
Affects Versions: 3.2.0
 Environment: RHEL 6
Reporter: Ethan Lai
Assignee: James Peach
Priority: Minor
 Fix For: 3.3.1

 Attachments: sslcertlookup.cc.patch-3.2.0, 
 ssl_multicert_reconfigure.patch


 Found that ssl_multicert.config is marked as modified, but not reread while 
 running traffic_line -x (Reread Config Files).
 Just wondering is this expected behavior or not?
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 [Oct 26 09:59:45.018] Manager {0x7f3c6723d7e0} NOTE: 
 [LocalManager::startProxy] Launching ts process
 [Oct 26 09:59:45.025] Manager {0x7f3c6723d7e0} NOTE: 
 [LocalManager::pollMgmtProcessServer] New process connecting fd '12'
 [Oct 26 09:59:45.025] Manager {0x7f3c6723d7e0} NOTE: [Alarms::signalAlarm] 
 Server Process born
 [Oct 26 09:59:46.066] Server {0x2b500a320680} DEBUG: (ssl) 
 ssl_multicert.config: /usr/local/etc/trafficserver/ssl_multicert.config
 [Oct 26 09:59:46.094] Server {0x2b500a320680} DEBUG: (ssl) mapping 
 'j1.free888.cloudns.biz' to certificate 
 /usr/local/etc/ats-cert/j1.free888.cloudns.biz-v2.pem
 [Oct 26 09:59:46.096] Server {0x2b500a320680} NOTE: logging initialized[15], 
 logging_mode = 3
 [Oct 26 09:59:46.126] Server {0x2b500a320680} NOTE: traffic server running
 $ sed -i 's/j1.free888.cloudns.biz-v2/j1.free888.cloudns.biz-v3/'  
 /usr/local/etc/trafficserver/ssl_multicert.config
 $ `trafflic_line -x`
 [Oct 26 09:59:59.954] Manager {0x7f3c5700} DEBUG: (rollback) 
 [Rollback::internalUpdate] Moving ssl_multicert.config from version 43 to 
 version 44
 [Oct 26 09:59:59.970] Manager {0x7f3c5700} NOTE: [fileUpdated] 
 ssl_multicert.config file has been modified
 [Oct 26 09:59:59.970] Manager {0x7f3c5700} NOTE: User has changed config 
 file ssl_multicert.config
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 No DEBUG: (ssl) mapping 'j1.free888.cloudns.biz' to certificate 
 /usr/local/etc/ats-cert/j1.free888.cloudns.biz-v3.pem message found.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (TS-1551) ssl_multicert.config not reread with traffic_line -x

2013-02-24 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13585665#comment-13585665
 ] 

jaekyung oh commented on TS-1551:
-

and there is still different code between 3.2.0 and 3.2.4 
(SSLNextProtocolAccept.cc, SSLNetVConnection.cc, SSLNetProcessor.cc, ...).

I need a patch for 3.2.4 also.

Would you tell me when the patches will be included in a newer version?

thanks.

 ssl_multicert.config not reread with traffic_line -x
 

 Key: TS-1551
 URL: https://issues.apache.org/jira/browse/TS-1551
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration, SSL
Affects Versions: 3.2.0
 Environment: RHEL 6
Reporter: Ethan Lai
Assignee: James Peach
Priority: Minor
 Fix For: 3.3.1

 Attachments: sslcertlookup.cc.patch-3.2.0, 
 ssl_multicert_reconfigure.patch


 Found that ssl_multicert.config is marked as modified, but not reread while 
 running traffic_line -x (Reread Config Files).
 Just wondering is this expected behavior or not?
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 [Oct 26 09:59:45.018] Manager {0x7f3c6723d7e0} NOTE: 
 [LocalManager::startProxy] Launching ts process
 [Oct 26 09:59:45.025] Manager {0x7f3c6723d7e0} NOTE: 
 [LocalManager::pollMgmtProcessServer] New process connecting fd '12'
 [Oct 26 09:59:45.025] Manager {0x7f3c6723d7e0} NOTE: [Alarms::signalAlarm] 
 Server Process born
 [Oct 26 09:59:46.066] Server {0x2b500a320680} DEBUG: (ssl) 
 ssl_multicert.config: /usr/local/etc/trafficserver/ssl_multicert.config
 [Oct 26 09:59:46.094] Server {0x2b500a320680} DEBUG: (ssl) mapping 
 'j1.free888.cloudns.biz' to certificate 
 /usr/local/etc/ats-cert/j1.free888.cloudns.biz-v2.pem
 [Oct 26 09:59:46.096] Server {0x2b500a320680} NOTE: logging initialized[15], 
 logging_mode = 3
 [Oct 26 09:59:46.126] Server {0x2b500a320680} NOTE: traffic server running
 $ sed -i 's/j1.free888.cloudns.biz-v2/j1.free888.cloudns.biz-v3/'  
 /usr/local/etc/trafficserver/ssl_multicert.config
 $ `trafflic_line -x`
 [Oct 26 09:59:59.954] Manager {0x7f3c5700} DEBUG: (rollback) 
 [Rollback::internalUpdate] Moving ssl_multicert.config from version 43 to 
 version 44
 [Oct 26 09:59:59.970] Manager {0x7f3c5700} NOTE: [fileUpdated] 
 ssl_multicert.config file has been modified
 [Oct 26 09:59:59.970] Manager {0x7f3c5700} NOTE: User has changed config 
 file ssl_multicert.config
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 No DEBUG: (ssl) mapping 'j1.free888.cloudns.biz' to certificate 
 /usr/local/etc/ats-cert/j1.free888.cloudns.biz-v3.pem message found.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (TS-1640) SSL certificate reconfiguration only works once

2013-02-21 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583875#comment-13583875
 ] 

jaekyung oh commented on TS-1640:
-

Hi James Peach.

can I get a patch or diff? I'm using 3.2.4 on openSUSE. The op team can't reload the 
new ssl config with traffic_line -x.

 SSL certificate reconfiguration only works once
 ---

 Key: TS-1640
 URL: https://issues.apache.org/jira/browse/TS-1640
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, Management, SSL
Reporter: James Peach
Assignee: James Peach
 Fix For: 3.3.1


 Using traffic_line -x to update the SSL certificate configuration only works 
 the first time. This indicates a problem with the ConfigUpdateHandler code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541562#comment-13541562
 ] 

jaekyung oh commented on TS-1006:
-

Happy New Year.

after a week of monitoring, your last patch shows it's effective. At first, for a 
couple of days, memory usage didn't stop increasing, but then it kept staying 
between a minimum of 72% and a maximum of 76%.

Even though I haven't applied the 6th patch, traffic server is stable now. Thank you.

 memory management, cut down memory waste ?
 --

 Key: TS-1006
 URL: https://issues.apache.org/jira/browse/TS-1006
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.1.1
Reporter: Zhao Yongming
Assignee: Bin Chen
 Fix For: 3.3.2

 Attachments: 0001-Allocator-optimize-InkFreeList-memory-pool.patch, 
 0002-Allocator-make-InkFreeList-memory-pool-configurable.patch, 
 0003-Allocator-store-InkChunkInfo-into-Chunk.patch, 
 0004-Allocator-optimize-alignment-size-to-avoid-mmap-fail.patch, 
 0005-Allocator-adjust-reclaiming-strategy-of-InkFreeList.patch, 
 0006-RamCacheLRU-split-LRU-queue-into-multiple-queues-to-.patch, 
 Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods


 when we review the memory usage in production, there is something 
 abnormal, i.e., it looks like TS takes much more memory than index data + common 
 system waste, and here is some memory dump result obtained by setting 
 proxy.config.dump_mem_info_frequency to 1; the one below is on a not-so-busy 
 forwarding system:
 physical memory: 32G
 RAM cache: 22G
 DISK: 6140 GB
 average_object_size 64000
 {code}
  allocated  |in-use  | type size  |   free list name
 |||--
   671088640 |   37748736 |2097152 | 
 memory/ioBufAllocator[14]
  2248146944 | 2135949312 |1048576 | 
 memory/ioBufAllocator[13]
  1711276032 | 1705508864 | 524288 | 
 memory/ioBufAllocator[12]
  1669332992 | 1667760128 | 262144 | 
 memory/ioBufAllocator[11]
  2214592512 | 221184 | 131072 | 
 memory/ioBufAllocator[10]
  2325741568 | 2323775488 |  65536 | 
 memory/ioBufAllocator[9]
  2091909120 | 2089123840 |  32768 | 
 memory/ioBufAllocator[8]
  1956642816 | 1956478976 |  16384 | 
 memory/ioBufAllocator[7]
  2094530560 | 2094071808 |   8192 | 
 memory/ioBufAllocator[6]
   356515840 |  355540992 |   4096 | 
 memory/ioBufAllocator[5]
 1048576 |  14336 |   2048 | 
 memory/ioBufAllocator[4]
  131072 |  0 |   1024 | 
 memory/ioBufAllocator[3]
   65536 |  0 |512 | 
 memory/ioBufAllocator[2]
   32768 |  0 |256 | 
 memory/ioBufAllocator[1]
   16384 |  0 |128 | 
 memory/ioBufAllocator[0]
   0 |  0 |576 | 
 memory/ICPRequestCont_allocator
   0 |  0 |112 | 
 memory/ICPPeerReadContAllocator
   0 |  0 |432 | 
 memory/PeerReadDataAllocator
   0 |  0 | 32 | 
 memory/MIMEFieldSDKHandle
   0 |  0 |240 | 
 memory/INKVConnAllocator
   0 |  0 | 96 | 
 memory/INKContAllocator
4096 |  0 | 32 | 
 memory/apiHookAllocator
   0 |  0 |288 | 
 memory/FetchSMAllocator
   0 |  0 | 80 | 
 memory/prefetchLockHandlerAllocator
   0 |  0 |176 | 
 memory/PrefetchBlasterAllocator
   0 |  0 | 80 | 
 memory/prefetchUrlBlaster
   0 |  0 | 96 | memory/blasterUrlList
   0 |  0 | 96 | 
 memory/prefetchUrlEntryAllocator
   0 |  0 |128 | 
 memory/socksProxyAllocator
   0 |  0 |144 | 
 memory/ObjectReloadCont
 3258368 | 576016 |592 | 
 memory/httpClientSessionAllocator
  825344 | 139568 |208 | 
 memory/httpServerSessionAllocator
22597632 |1284848 |   9808 | memory/httpSMAllocator
   0 |  0 | 32 | 
 memory/CacheLookupHttpConfigAllocator
   0 |  0 |   9856 | 
 

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541595#comment-13541595
 ] 

jaekyung oh commented on TS-1006:
-

sure.

physical mem of OS: 16G
ram_cache.size: 80 (8G)
reclaim_factor: 0.30
max_overage: 3

ram_cache.algorithm: 0

thanks.

 memory management, cut down memory waste ?
 --

 Key: TS-1006
 URL: https://issues.apache.org/jira/browse/TS-1006
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.1.1
Reporter: Zhao Yongming
Assignee: Bin Chen
 Fix For: 3.3.2

 Attachments: 0001-Allocator-optimize-InkFreeList-memory-pool.patch, 
 0002-Allocator-make-InkFreeList-memory-pool-configurable.patch, 
 0003-Allocator-store-InkChunkInfo-into-Chunk.patch, 
 0004-Allocator-optimize-alignment-size-to-avoid-mmap-fail.patch, 
 0005-Allocator-adjust-reclaiming-strategy-of-InkFreeList.patch, 
 0006-RamCacheLRU-split-LRU-queue-into-multiple-queues-to-.patch, 
 Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods


 when we review the memory usage in production, there is something 
 abnormal, i.e., it looks like TS takes much more memory than index data + common 
 system waste, and here is some memory dump result obtained by setting 
 proxy.config.dump_mem_info_frequency to 1; the one below is on a not-so-busy 
 forwarding system:
 physical memory: 32G
 RAM cache: 22G
 DISK: 6140 GB
 average_object_size 64000
 {code}
  allocated  |in-use  | type size  |   free list name
 |||--
   671088640 |   37748736 |2097152 | 
 memory/ioBufAllocator[14]
  2248146944 | 2135949312 |1048576 | 
 memory/ioBufAllocator[13]
  1711276032 | 1705508864 | 524288 | 
 memory/ioBufAllocator[12]
  1669332992 | 1667760128 | 262144 | 
 memory/ioBufAllocator[11]
  2214592512 | 221184 | 131072 | 
 memory/ioBufAllocator[10]
  2325741568 | 2323775488 |  65536 | 
 memory/ioBufAllocator[9]
  2091909120 | 2089123840 |  32768 | 
 memory/ioBufAllocator[8]
  1956642816 | 1956478976 |  16384 | 
 memory/ioBufAllocator[7]
  2094530560 | 2094071808 |   8192 | 
 memory/ioBufAllocator[6]
   356515840 |  355540992 |   4096 | 
 memory/ioBufAllocator[5]
 1048576 |  14336 |   2048 | 
 memory/ioBufAllocator[4]
  131072 |  0 |   1024 | 
 memory/ioBufAllocator[3]
   65536 |  0 |512 | 
 memory/ioBufAllocator[2]
   32768 |  0 |256 | 
 memory/ioBufAllocator[1]
   16384 |  0 |128 | 
 memory/ioBufAllocator[0]
   0 |  0 |576 | 
 memory/ICPRequestCont_allocator
   0 |  0 |112 | 
 memory/ICPPeerReadContAllocator
   0 |  0 |432 | 
 memory/PeerReadDataAllocator
   0 |  0 | 32 | 
 memory/MIMEFieldSDKHandle
   0 |  0 |240 | 
 memory/INKVConnAllocator
   0 |  0 | 96 | 
 memory/INKContAllocator
4096 |  0 | 32 | 
 memory/apiHookAllocator
   0 |  0 |288 | 
 memory/FetchSMAllocator
   0 |  0 | 80 | 
 memory/prefetchLockHandlerAllocator
   0 |  0 |176 | 
 memory/PrefetchBlasterAllocator
   0 |  0 | 80 | 
 memory/prefetchUrlBlaster
   0 |  0 | 96 | memory/blasterUrlList
   0 |  0 | 96 | 
 memory/prefetchUrlEntryAllocator
   0 |  0 |128 | 
 memory/socksProxyAllocator
   0 |  0 |144 | 
 memory/ObjectReloadCont
 3258368 | 576016 |592 | 
 memory/httpClientSessionAllocator
  825344 | 139568 |208 | 
 memory/httpServerSessionAllocator
22597632 |1284848 |   9808 | memory/httpSMAllocator
   0 |  0 | 32 | 
 memory/CacheLookupHttpConfigAllocator
   0 |  0 |   9856 | 
 memory/httpUpdateSMAllocator
   0 |  0 |128 | 
 memory/RemapPluginsAlloc
   0 |  0 | 48 | 
 

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-29 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13540843#comment-13540843
 ] 

jaekyung oh commented on TS-1006:
-

Got it. I'll discuss ram_cache.size with the op team. Thank you for the prompt and 
easy-to-understand explanations.

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-28 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13540739#comment-13540739
 ] 

jaekyung oh commented on TS-1006:
-

Hi Yunkai Zhang.

I'm afraid there is a slight memory leak, because memory usage keeps increasing. Traffic 
Server currently uses 8.7 GB, and it will probably reach 9 GB by tomorrow.

We set ram_cache.size to 80 and ram_cache_cutoff to 60 (see the records.config sketch below).
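
For reference, a minimal records.config sketch of these two settings under what are assumed 
to be their full parameter names; the byte values below are placeholders, since the numbers 
quoted above appear truncated in this archive:

{code}
# Placeholder values - substitute the sizes actually used in production
CONFIG proxy.config.cache.ram_cache.size INT 8589934592    # total RAM cache, in bytes
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304     # objects larger than this are not kept in the RAM cache
{code}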

Part of yesterday's debug logs:

[2aac3c888700:40][ink_queue.cc:00616][F] 8599.70M t:12861   f:71   m:71   avg:70.9   M:12790  csbase:64   csize:64   tsize:4096    cbsize:266240
[2aac3c888700:40][ink_queue.cc:00623][-] 8599.70M t:12791   f:1    m:71   avg:70.9   M:12790  csbase:64   csize:64   tsize:4096    cbsize:266240
[2aac3c888700:41][ink_queue.cc:00631][M] 8599.70M t:15469   f:0    m:1    avg:6.3    M:15469  csbase:32   csize:32   tsize:8192    cbsize:266240
[2aac3c888700:41][ink_queue.cc:00634][+] 8599.70M t:15501   f:31   m:1    avg:6.3    M:15469  csbase:32   csize:32   tsize:8192    cbsize:266240
[2aac3c888700:01][ink_queue.cc:00631][M] 8599.70M t:117925  f:0    m:12   avg:45.7   M:117925 csbase:256  csize:278  tsize:88      cbsize:24576
[2aac3c888700:01][ink_queue.cc:00634][+] 8599.70M t:118203  f:277  m:12   avg:45.7   M:117925 csbase:256  csize:278  tsize:88      cbsize:24576
[2aac3c888700:40][ink_queue.cc:00631][M] 8599.70M t:12791   f:0    m:1    avg:9.1    M:12791  csbase:64   csize:64   tsize:4096    cbsize:266240
[2aac3c888700:40][ink_queue.cc:00634][+] 8599.70M t:12855   f:63   m:1    avg:9.1    M:12791  csbase:64   csize:64   tsize:4096    cbsize:266240
[2aac3c888700:42][ink_queue.cc:00616][F] 8599.70M t:22873   f:18   m:16   avg:16.1   M:22855  csbase:32   csize:32   tsize:16384   cbsize:528384
[2aac3c888700:42][ink_queue.cc:00623][-] 8599.70M t:22857   f:2    m:16   avg:16.1   M:22855  csbase:32   csize:32   tsize:16384   cbsize:528384
[2aac3c888700:43][ink_queue.cc:00616][F] 8599.70M t:4474    f:37   m:26   avg:31.2   M:4437   csbase:32   csize:31   tsize:32768   cbsize:1019904
[2aac3c888700:43][ink_queue.cc:00623][-] 8599.70M t:4443    f:6    m:26   avg:31.2   M:4437   csbase:32   csize:31   tsize:32768   cbsize:1019904
[2aac3c888700:47][ink_queue.cc:00616][F] 8599.70M t:264     f:2    m:1    avg:1.3    M:262    csbase:32   csize:1    tsize:524288  cbsize:528384
[2aac3c888700:47][ink_queue.cc:00623][-] 8599.19M t:263     f:1    m:1    avg:1.3    M:262    csbase:32   csize:1    tsize:524288  cbsize:528384
[2aac3c888700:46][ink_queue.cc:00631][M] 8599.19M t:733     f:0    m:0    avg:1.1    M:733    csbase:32   csize:3    tsize:262144  cbsize:790528
[2aac3c888700:46][ink_queue.cc:00634][+] 8599.19M t:736     f:2    m:0    avg:1.1    M:733    csbase:32   csize:3    tsize:262144  cbsize:790528
[2aac3c888700:27][ink_queue.cc:00616][F] 8599.19M t:157     f:157  m:152  avg:150.7  M:0      csbase:128  csize:129  tsize:2048    cbsize:266240
[2aac3c888700:27][ink_queue.cc:00623][-] 8598.94M t:7       f:7    m:152  avg:150.7  M:0      csbase:128  csize:129  tsize:2048    cbsize:266240
[2aac3c888700:41][ink_queue.cc:00616][F] 8598.94M t:15501   f:30   m:30   avg:29.8   M:15471  csbase:32   csize:32   tsize:8192    cbsize:266240

The most recent debug logs:

[2aac3c989700:44][ink_queue.cc:00616][F] 8710.25M t:1461    f:2    m:1    avg:1.3    M:1459   csbase:32   csize:15   tsize:65536   cbsize:987136
[2aac3c989700:44][ink_queue.cc:00623][-] 8710.25M t:1460    f:1    m:1    avg:1.3    M:1459   csbase:32   csize:15   tsize:65536   cbsize:987136
[2aac3c989700:45][ink_queue.cc:00616][F] 8710.25M t:1536    f:2    m:1    avg:0.8    M:1534   csbase:32   csize:7    tsize:131072  cbsize:921600
[2aac3c989700:45][ink_queue.cc:00623][-] 8710.25M t:1536    f:2    m:1    avg:0.8    M:1534   csbase:32   csize:7    tsize:131072  cbsize:921600
[2aac3c989700:47][ink_queue.cc:00616][F] 8710.25M t:93      f:3    m:2    avg:1.9    M:90     csbase:32   csize:1    tsize:524288  cbsize:528384
[2aac3c989700:47][ink_queue.cc:00623][-] 8709.75M t:92      f:2    m:2    avg:1.9    M:90     csbase:32   csize:1    tsize:524288  cbsize:528384
[2aac3c989700:41][ink_queue.cc:00631][M] 8709.75M t:18766   f:0    m:1    avg:0.3    M:18766  csbase:32   csize:32   tsize:8192    cbsize:266240
[2aac3c989700:41][ink_queue.cc:00634][+] 8710.00M t:18798   f:31   m:1    avg:0.3    M:18766  csbase:32   csize:32   tsize:8192    cbsize:266240
[2aac3c686700:42][ink_queue.cc:00616][F] 8710.00M t:26868   f:32   m:32   avg:30.6   M:26836  csbase:32   csize:32   tsize:16384   cbsize:528384
[2aac3c686700:42][ink_queue.cc:00623][-] 8710.00M t:26838   f:2    m:32   avg:30.6   M:26836  csbase:32   csize:32   tsize:16384   cbsize:528384
[2aac3c686700:01][ink_queue.cc:00631][M] 8710.00M t:141808  f:0    m:2    avg:6.2    M:141808 csbase:256  csize:278  tsize:88      cbsize:24576
[2aac3c686700:01][ink_queue.cc:00634][+] 8710.02M t:142086  f:277  m:2    avg:6.2    M:141808 csbase:256  csize:278  tsize:88      cbsize:24576

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-27 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13540248#comment-13540248
 ] 

jaekyung oh commented on TS-1006:
-

Hi Yunkai Zhang!

It's great!! It's been only one day since I applied your last patch, and it looks very 
promising.

Before, memory usage kept increasing until it reached 8G.

Since then, Traffic Server has kept memory usage around 8G for 12 hours. That's what we 
want! Thank you so much. Great job!!

[jira] [Comment Edited] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539813#comment-13539813
 ] 

jaekyung oh edited comment on TS-1006 at 12/27/12 3:20 AM:
---

Your patches look fine, but regretfully they don't look like a fundamental solution.

Memory usage still keeps increasing every moment, and today we finally had no choice but 
to restart Traffic Server because memory usage reached 90% of 16G.

In spite of your new patches, it seems we have to restart Traffic Server every 1~2 days to 
avoid a system crash that might be caused by running out of memory. It's the same situation 
as before your patches.

Thanks again.

  was (Author: genext):
your patches looks fine but regretfully it doesn't look fundamental 
solution.

because the memory usage still keep increasing every moment and finally we 
couldn't help but restart traffic server today because memory usage reached 90% 
of 16G. we

in spite of your new patches it seems we have to restart traffic server every 
1~2 days to avoid system crash might be caused by run out of memory. it's same 
situation before your patches

Thansk you again.
  
[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539813#comment-13539813
 ] 

jaekyung oh commented on TS-1006:
-

Your patches look fine, but regretfully they don't look like a fundamental solution.

Memory usage still keeps increasing every moment, and today we finally had no choice but 
to restart Traffic Server because memory usage reached 90% of 16G.

In spite of your new patches, it seems we have to restart Traffic Server every 1~2 days to 
avoid a system crash that might be caused by running out of memory. It's the same situation 
as before your patches.

Thanks again.

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539851#comment-13539851
 ] 

jaekyung oh commented on TS-1006:
-

I don't know what values would be adequate. The current values were:

CONFIG proxy.config.allocator.enable_reclaim INT 1
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.20
CONFIG proxy.config.allocator.max_overage INT 100

If I adjust these values as below, is that OK?
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.50
CONFIG proxy.config.allocator.max_overage INT 10

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-25 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539466#comment-13539466
 ] 

jaekyung oh commented on TS-1006:
-

I've applied your new patches this morning and there has been no problem yet. It passed 4G, 
where the error happened last time. It looks fine now. I'll let you know if anything else 
happens.

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-24 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539224#comment-13539224
 ] 

jaekyung oh commented on TS-1006:
-

I've tried again in accordance with your guide.

[2ba8ae8cc700:43][ink_queue.cc:00601][-] all:4011MB  t:1024    f:8    m:40   avg:40.0   malloc:1016  csize:32   tsize:32768   cbsize:1052672
[2ba8adbb7e20:40][ink_queue.cc:00595][M] all:4029MB  t:767     f:87   m:86   avg:84.8   malloc:680   csize:64   tsize:4096    cbsize:266240
[2ba8adbb7e20:40][ink_queue.cc:00601][-] all:4029MB  t:703     f:23   m:86   avg:84.8   malloc:680   csize:64   tsize:4096    cbsize:266240
[2ba8ae8cc700:03][ink_queue.cc:00668][F] all:4044MB  t:628     f:108  m:108  avg:107.1  malloc:519   csize:70   tsize:232     cbsize:16384
[2ba8ae8cc700:03][ink_queue.cc:00674][-] all:4044MB  t:557     f:38   m:108  avg:107.1  malloc:519   csize:70   tsize:232     cbsize:16384
/usr/local/bin/traffic_server - STACK TRACE:
/usr/local/lib/libtsutil.so.3(ink_freelist_new+0x992)[0x2ba8ab4b5072]
/usr/local/bin/traffic_server[0x64a68f]
/usr/local/bin/traffic_server(_ZN7CacheVC10handleReadEiP5Event+0x1e2)[0x64ef52]
/usr/local/bin/traffic_server(_ZN5Cache9open_readEP12ContinuationP7INK_MD5P7HTTPHdrP21CacheLookupHttpConfig13CacheFragTypePci+0x754)[0x67ef54]
/usr/local/bin/traffic_server(_ZN14CacheProcessor9open_readEP12ContinuationP3URLP7HTTPHdrP21CacheLookupHttpConfigl13CacheFragType+0x130)[0x6540f0]
/usr/local/bin/traffic_server(_ZN11HttpCacheSM9open_readEP3URLP7HTTPHdrP21CacheLookupHttpConfigl+0x74)[0x521c14]
/usr/local/bin/traffic_server(_ZN6HttpSM24do_cache_lookup_and_readEv+0xfd)[0x538c1d]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0xb20)[0x54ffd0]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x33a)[0x547fda]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x538)[0x54f9e8]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0xb92)[0x550042]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x33a)[0x547fda]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x538)[0x54f9e8]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x33a)[0x547fda]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x9a)[0x55091a]
/usr/local/bin/traffic_server(TSHttpTxnReenable+0x40d)[0x4bec1d]
/usr/local/libexec/trafficserver/purge.so(+0x1e56)[0x2ba8b99bfe56]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0xfb)[0x54927b]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x538)[0x54f9e8]
/usr/local/bin/traffic_server(_ZN6HttpSM32setup_client_read_request_headerEv+0x39f)[0x54793f]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x25a)[0x547efa]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM21attach_client_sessionEP17HttpClientSessionP14IOBufferReader+0x688)[0x54a4e8]
/usr/local/bin/traffic_server(_ZN17HttpClientSession16state_keep_aliveEiPv+0xa8)[0x523738]
/usr/local/bin/traffic_server[0x6b6f4a]
/usr/local/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x1fe)[0x6aff0e]
/usr/local/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8e)[0x6dfd4e]
/usr/local/bin/traffic_server(_ZN7EThread7executeEv+0x4f0)[0x6e06c0]
/usr/local/bin/traffic_server[0x6df9b2]
/lib64/libpthread.so.0(+0x6a3f)[0x2ba8ab6dca3f]
/lib64/libc.so.6(clone+0x6d)[0x2ba8ad91a67d]
FATAL: Failed to mmap 50331648 bytes, Cannot allocate memory   <- here, is this what you want?
/usr/local/bin/traffic_server - STACK TRACE:
/usr/local/lib/libtsutil.so.3(ink_fatal+0x88)[0x2ba8ab4b0c68]
/usr/local/lib/libtsutil.so.3(+0x17aca)[0x2ba8ab4b3aca]
/usr/local/lib/libtsutil.so.3(ink_freelist_new+0x992)[0x2ba8ab4b5072]
/usr/local/bin/traffic_server[0x64a68f]
/usr/local/bin/traffic_server(_ZN7CacheVC10handleReadEiP5Event+0x1e2)[0x64ef52]
