[ https://issues.apache.org/jira/browse/TS-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143318#comment-14143318 ]

Sudheer Vinukonda commented on TS-2314:
---------------------------------------

[~zym] : 

Not sure I understand the comment - enabling rww 
(proxy.config.cache.enable_read_while_writer=1) allows partial byte ranges that 
are already in the cache to be delivered from cache. For byte ranges that are 
not in the cache yet, the present rww implementation blocks the request until 
the requested range is available in the cache. 

For example, suppose you are downloading a file that is 1000 bytes in size.

Assume that 500 bytes of the file have been downloaded so far and a separate 
request arrives with a partial byte range:

1. rww=0, a range request for bytes:1-100 or bytes:600-700 triggers a new origin 
request
2. rww=1, a range request for bytes:1-100 returns immediately from cache, while 
bytes:600-700 stalls until that range is available in the cache (see the curl 
sketch below)
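
As an illustration only (the host and object name below are placeholders, not 
from this ticket), the difference can be observed by timing two range requests 
through the proxy while the object is still being filled:

    # the object is 1000 bytes; roughly the first 500 bytes are cached so far
    curl -s -r 1-100   -o /dev/null -w '%{http_code} %{time_total}\n' http://cache.example.com/file.bin
    # -> satisfiable range: returned from cache right away with rww=1
    curl -s -r 600-700 -o /dev/null -w '%{http_code} %{time_total}\n' http://cache.example.com/file.bin
    # -> unsatisfiable range: with rww=1 this stalls until bytes 600-700 are cached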

This is not convenient for large video downloads (say you are downloading a 5GB 
video and someone seeks to the last 10MB), as the request could potentially be 
blocked for a while. 

The enhancement in this jira allows such unsatisfiable range requests to be 
fetched directly from the origin, rather than stalling the request, when rww is 
set to 2 (proxy.config.cache.enable_read_while_writer=2).

In other words, 

3. rww=2, range request for bytes:1-100 returns immediately from cache, while 
bytes:600-700 forces an origin request
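
For reference, a minimal records.config sketch of the setting being discussed 
(the value 2 is the new behavior added by this jira; 0 and 1 are the existing 
behaviors described above):

    # 0 = no read-while-writer
    # 1 = serve cached ranges, stall on ranges not yet in cache
    # 2 = serve cached ranges, go straight to origin for ranges not yet in cache
    CONFIG proxy.config.cache.enable_read_while_writer INT 2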


> New config to allow unsatisfiable Range: requests to go straight to Origin
> ------------------------------------------------------------------------
>
>                 Key: TS-2314
>                 URL: https://issues.apache.org/jira/browse/TS-2314
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core
>            Reporter: jaekyung oh
>              Labels: range
>         Attachments: TS-2314.diff
>
>
> Basically, read_while_writer works fine when ATS handles a normal file.
> For progressive download and playback of an mp4 whose moov atom is placed at 
> the end of the file, ATS builds and returns a wrong response for a range 
> request from the unfulfilled cache when read_while_writer is 1.
> On the origin, Apache has the h264 streaming module. Everything is OK whether 
> or not the moov atom is placed at the beginning of the file on the origin, 
> except when a range request happens with read_while_writer.
> Most of our customers' content places the moov atom at the end of the file, 
> and in that case the movie player stops playing when it seeks somewhere in 
> the movie.
> To check whether read_while_writer works fine:
> 1. prepare an mp4 file whose moov atom is placed at the end of the file.
> 2. curl --range xxxx-xxxx http://www.test.com/mp4/test.mp4 1> 
> no_cache_from_origin 
> 3. wget http://www.test.com/mp4/test.mp4
> 4. right after wget, execute “curl --range xxxx-xxxx 
> http://www.test.com/mp4/test.mp4 1> from_read_while_writer” on another 
> terminal (the point is to send the range request while ATS is still 
> downloading)
> 5. after wget is done, curl --range xxxx-xxxx 
> http://www.test.com/mp4/test.mp4 1> from_cache
> 6. compare those files with bindiff.
> The response from the origin (no_cache_from_origin) for the range request is 
> exactly the same as from_cache, which results from the range request in #5, 
> but from_read_while_writer from #4 is totally different from the others.
> I think a range request should be forwarded to the origin server if the 
> content at that offset can't be found in the cache, even if read_while_writer 
> is on; instead, ATS builds (from where?) and sends a wrong response. (In 
> squid.log it indicates TCP_HIT.)
> That’s why a movie player stops when it seeks right after the movie starts.
> Well, we turned off read_while_writer and movie playback is OK, but the 
> problem is that read_while_writer is a global option; we can't set it 
> differently for each remap entry via conf_remap.
> So downloading a big file (not an mp4 file) puts overhead on the origin 
> server because read_while_writer is off.


