Re: [squid-users] Re: Caching netflix by Mime headers
Yes, but won't using quick_abort -1 disable movie seeking? For example, if I'm at minute 10 of a movie and seek to minute 90, Squid starts downloading the whole movie. That is not functional.

LD

2013/2/17 Amos Jeffries squ...@treenet.co.nz:
> On 18/02/2013 12:36 p.m., Luis Daniel Lucio Quiroz wrote:
>> Don't get me wrong, I'm preparing a cache to save bandwidth in a classic home that already has a Netflix subscription, but where you have kids. You know kids watch the same picture again and again; that is where we can save bandwidth. That's why I want to cache the ISMV/ISMA flux. What I have realized is that the same request is always made by the same movie-device pair.
>
> Well, the Range responses won't cache, but they can be served out of cache, so your attempt at quick_abort was on the right track. On top of that you also need refresh_pattern ignore-no-store on the file path to cause the full copy to be stored.
>
> Amos
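For reference, range_offset_limit is what turns a seek into a full download: at -1, Squid converts any Range request into a fetch of the entire object from byte 0. From Squid 3.2 the directive accepts an ACL, so the whole-object behaviour can be confined to the fragment files while the rest of the traffic keeps normal Range pass-through. A sketch (the acl name and regex are illustrative, not from the thread):

```
# Only fetch whole objects for the Smooth Streaming fragment files;
# everything else keeps normal Range pass-through behaviour.
acl netflix_ism url_regex -i \.ism[av]\?
range_offset_limit -1 netflix_ism

# Keep the server-side fetch going even if the player aborts
quick_abort_min -1 KB
```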
[squid-users] Re: Caching netflix by Mime headers
quick_abort_min -1
range_offset_limit -1

may work on YouTube, but not on Netflix. It starts downloading the whole movie, even when you seek.

2013/2/17 Luis Daniel Lucio Quiroz luis.daniel.lu...@gmail.com:
> Maybe OT, maybe a dream, but I need to ask. Turning on mime headers you will see this:
>
> 1361122064.970 5480 192.168.7.137 TCP_MISS/206 981083 GET http://108.175.38.89/12348119.ismv? - HIER_DIRECT/108.175.38.89 application/octet-stream [Accept: */*\r\nHost: 108.175.38.89\r\nRange: bytes=3498123192-3499103587\r\nX-Device: 2012.4 NFPS3-001\r\n] [HTTP/1.1 206 Partial Content\r\nServer: nginx/1.2.4\r\nDate: Sun, 17 Feb 2013 17:31:07 GMT\r\nContent-Type: application/octet-stream\r\nContent-Length: 980396\r\nLast-Modified: Mon, 03 Dec 2012 15:00:39 GMT\r\nConnection: keep-alive\r\nCache-Control: no-store\r\nPragma: no-cache\r\nAccess-Control-Allow-Origin: *\r\nX-TCP-Info: snd_wscale=7;rcv_wscale=9;snd_mss=524;rcv_mss=524;last_data_recv=1000;rtt=46187;rttvar=19875;snd_ssthresh=3668;snd_cwnd=60784;snd_wnd=789248;rcv_wnd=1049048;snd_rexmitpack=186;rcv_ooopack=0;snd_zerowin=0;\r\nContent-Range: bytes 3498123192-3499103587/3986579703\r\n\r]
>
> The interesting part is the Range header. I know it is quite a bad idea to download the whole movie first (possible, as with YouTube), but Netflix movies are so big that people will get desperate before the download finishes. So my question is: is it possible to cache 206 answers, keeping the mime headers in mind?
>
> LD
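For context, the two directives named above do the following; a minimal squid.conf fragment with the values from the post:

```
# -1 = no limit: on any client Range request, fetch the object
# from byte 0 so a complete copy can land in the cache
range_offset_limit -1

# -1 = never abort the server-side fetch when the client disconnects
quick_abort_min -1 KB
```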
Re: [squid-users] Re: Caching netflix by Mime headers
You will need more than just one or two lines of logs and data to determine that. I don't know a thing about how Netflix players do their stuff, but I doubt they make it so simple that it can be cached using basic Squid.

Eliezer

On 2/17/2013 9:01 PM, Luis Daniel Lucio Quiroz wrote:
> I turned on more logging and I see this:
>
> 1361126274.457 66976 192.168.7.134 TCP_MISS/206 18439445 GET http://108.175.42.86/658595255.ismv?c=can=812v=3e=1361155197t=L_cj-INb4sDdWF9RHoaOwwjBg7od=androidp=5.c4MuCNB5I0-lmXZGQaxWaOpiwGX91JBhZqIvTbIHroM - HIER_DIRECT/108.175.42.86 application/octet-stream
> 1361126280.021 72537 192.168.7.134 TCP_MISS/206 1095098 GET http://108.175.42.86/658618947.isma?c=can=812v=3e=1361155197t=_I4PVA3JkFpFxS90V8qgmM1Q-OUd=androidp=5.c4MuCNB5I0-lmXZGQaxWaOpiwGX91JBhZqIvTbIHroM - HIER_DIRECT/108.175.42.86 application/octet-stream
>
> My question is: if I force caching of \d+\.ism[av] files, will the ? payload be collapsed, or will Squid differentiate a?b and a?c, for example? I hope I am being clear.
>
> LD

-- 
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il
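On the a?b versus a?c question: Squid derives its store key from the request method plus the full URL, query string included, so the two requests are cached as separate objects. A toy model of that keying (this is an illustration, not Squid's actual hashing code; the URLs are made up):

```python
from hashlib import md5

def cache_key(method, url):
    # Squid keys cache objects on the method plus the *entire* URL,
    # query string included, so a?b and a?c map to different objects.
    return md5(("%s:%s" % (method, url)).encode()).hexdigest()

k1 = cache_key("GET", "http://example.test/1.ismv?b")
k2 = cache_key("GET", "http://example.test/1.ismv?c")
print(k1 == k2)  # different query strings -> different cache keys -> False
```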
Re: [squid-users] Re: Caching netflix by Mime headers
On 18/02/2013 8:58 a.m., Eliezer Croitoru wrote:
> You will need more than just one or two lines of logs and data to determine that.

Sadly, those lines are enough to say that Squid does not currently have Range support. So Squid can't cache those 206 responses yet anyway, even if more complicated tricks are used to avoid other request differences.

> I don't know a thing about how Netflix players do their stuff, but I doubt they make it so simple that it can be cached using basic Squid.

Netflix appear to be one of the cache-friendly providers. Some smart cookies over there are using cache controls and HTTP bandwidth-reduction features *properly* for once. I advise leaving their traffic alone. Yes, their site uses a lot of bandwidth, but these *are* large HD movies with per-user licensing embedded. The binaries *actually* can't be shared by multiple users, which makes them non-cacheable in most cases.

Note that due to bandwidth costs Netflix themselves have an ongoing vested interest in improving the cacheability of their content wherever possible.

> On 2/17/2013 9:01 PM, Luis Daniel Lucio Quiroz wrote:
>> I turned on more logging and I see this:
>>
>> 1361126274.457 66976 192.168.7.134 TCP_MISS/206 18439445 GET http://108.175.42.86/658595255.ismv?c=can=812v=3e=1361155197t=L_cj-INb4sDdWF9RHoaOwwjBg7od=androidp=5.c4MuCNB5I0-lmXZGQaxWaOpiwGX91JBhZqIvTbIHroM - HIER_DIRECT/108.175.42.86 application/octet-stream
>> 1361126280.021 72537 192.168.7.134 TCP_MISS/206 1095098 GET http://108.175.42.86/658618947.isma?c=can=812v=3e=1361155197t=_I4PVA3JkFpFxS90V8qgmM1Q-OUd=androidp=5.c4MuCNB5I0-lmXZGQaxWaOpiwGX91JBhZqIvTbIHroM - HIER_DIRECT/108.175.42.86 application/octet-stream
>>
>> My question is: if I force caching of \d+\.ism[av] files, will the ? payload be collapsed, or will Squid differentiate a?b and a?c, for example?

Both the *.ism* and the t=* pieces of those URIs are changing between the requests. Do you know exactly what those pieces mean? In particular, do you *know* they are safe to remove? ... if you say yes, you are probably wrong. One seems to be an audio stream and the other a video stream.

IMO, you may be able to alias the IP address back to a hostname using the StoreID feature now in squid-3 (but not the Store-URL version in 2.7) to de-duplicate. But that is just another guess as well.

Remember what you risk when getting it wrong:
- responding with movie A to movie B requests (worst case: movie A being XXX-rated and movie B a kids' flick);
- DRM licensing *inside* the media risks that a user receiving a HIT cannot play it after a huge bandwidth-wasting download;
- loss or crossover of video or audio streams.

None of which are great experiences for your users or your helpdesk. Not everybody is out to break your cache. You could be doing it to yourself without any need.

Amos
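The StoreID feature Amos mentions works as an external helper: Squid writes each request URL to the helper's stdin and the helper replies with a normalised store key. A minimal sketch, assuming Squid 3.4+ (store_id_program) and assuming, riskily, per all the warnings above, that the t= token is the only volatile part worth stripping; the regexes are illustrative, not verified against real Netflix traffic:

```python
#!/usr/bin/env python3
"""Minimal Store-ID helper sketch for Squid's store_id_program.

Squid writes one request per line ("URL [extras]") and expects
"OK store-id=<key>" to rewrite the store key, or "ERR" to leave the
URL alone. Stripping the t= token is a hypothetical normalisation,
with all the content-mixup risks described in this thread.
"""
import re
import sys

FRAGMENT = re.compile(r'\.ism[av]\?')   # .ismv / .isma with a query string
TOKEN = re.compile(r'([?&])t=[^&]*&?')  # the volatile t= parameter

def store_id(url):
    """Return a normalised key, or '' when the URL should not be rewritten."""
    if not FRAGMENT.search(url):
        return ''
    return TOKEN.sub(r'\1', url).rstrip('?&')

if __name__ == '__main__':
    for line in sys.stdin:
        parts = line.split()
        url = parts[0] if parts else ''
        key = store_id(url)
        sys.stdout.write('OK store-id=%s\n' % key if key else 'ERR\n')
        sys.stdout.flush()
```

It would be wired in with store_id_program pointing at the script, plus a store_id_access ACL restricting it to the relevant traffic.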
Re: [squid-users] Re: Caching netflix by Mime headers
Don't get me wrong, I'm preparing a cache to save bandwidth in a classic home that already has a Netflix subscription, but where you have kids. You know kids watch the same picture again and again; that is where we can save bandwidth. That's why I want to cache the ISMV/ISMA flux. What I have realized is that the same request is always made by the same movie-device pair.

LD

2013/2/17 Amos Jeffries squ...@treenet.co.nz:
> On 18/02/2013 8:58 a.m., Eliezer Croitoru wrote:
>> You will need more than just one or two lines of logs and data to determine that.
>
> Sadly, those lines are enough to say that Squid does not currently have Range support. So Squid can't cache those 206 responses yet anyway, even if more complicated tricks are used to avoid other request differences.
>
> [snip]
>
> Remember what you risk when getting it wrong:
> - responding with movie A to movie B requests (worst case: movie A being XXX-rated and movie B a kids' flick);
> - DRM licensing *inside* the media risks that a user receiving a HIT cannot play it after a huge bandwidth-wasting download;
> - loss or crossover of video or audio streams.
>
> None of which are great experiences for your users or your helpdesk. Not everybody is out to break your cache. You could be doing it to yourself without any need.
>
> Amos
Re: [squid-users] Re: Caching netflix by Mime headers
On 18/02/2013 12:36 p.m., Luis Daniel Lucio Quiroz wrote:
> Don't get me wrong, I'm preparing a cache to save bandwidth in a classic home that already has a Netflix subscription, but where you have kids. You know kids watch the same picture again and again; that is where we can save bandwidth. That's why I want to cache the ISMV/ISMA flux. What I have realized is that the same request is always made by the same movie-device pair.

Well, the Range responses won't cache, but they can be served out of cache, so your attempt at quick_abort was on the right track. On top of that you also need refresh_pattern ignore-no-store on the file path to cause the full copy to be stored.

Amos
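In squid.conf terms, Amos's two-part suggestion might look like the following sketch. The regex is an assumption based on the URLs earlier in the thread, and ignore-no-store deliberately violates the Cache-Control: no-store header Netflix sends, with all the risks already described:

```
# Keep fetching after the player aborts, so full copies finish downloading
quick_abort_min -1 KB

# Override "Cache-Control: no-store" on the fragment files (HTTP violation)
refresh_pattern -i \.ism[av] 10080 90% 43200 ignore-no-store
```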