Re: [squid-users] Caching URLs with a ? in them?

2013-02-13 Thread Scott Baker
On 02/12/2013 04:51 PM, Amos Jeffries wrote:
 I have a bunch of static content with appropriate Expires headers, but
 the URL contains a ?serial=123456 where the serial number is dynamic.
 Is squid smart enough to ignore the fact that the URL looks like a
 dynamic request,
 
 It *is* a dynamic request. Look see ... the URL is constantly changing.

The URL ONLY changes for logging purposes. The content being served is
static. The serial number is ONLY present so I can comb the logs and find
out who picked up a resource and when.

   and use the expire headers to see that it's indeed
 static/cacheable content?
 
 Expires is relative to the URL. So if the URL changes, it's a *new* object
 (MISS) with new Expiry details. Get the picture?
 
 
 see http://wiki.squid-cache.org/ConfigExamples/DynamicContent for the
 configuration directives to change for caching these responses. If you
 have a new install of Squid-3.1 or later, the default settings will cache
 them.
 
 However, once you have them cached, you will probably still see a lot of
 MISSes happening because the URLs keep changing. For the best cache HIT rate
 you need to look at why those serials exist in the URL at all. They are
 breaking cacheability for you and everyone else on the Internet. Do you
 have control over the origin server generating those URLs? If you could
 explain what exactly the serial is for, perhaps we could point you in the
 direction of fixing the object's cacheability.
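
For reference, that wiki page boils down to keeping the query-string
refresh_pattern ahead of the catch-all rule and dropping the legacy QUERY
acl. A rough squid.conf sketch, not copied verbatim from the wiki:

# the query-string rule must come before the catch-all
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .                 0 20% 4320
# and remove any old "acl QUERY urlpath_regex cgi-bin \?" / "cache deny QUERY" lines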


-- 
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253


Re: [squid-users] Caching URLs with a ? in them?

2013-02-13 Thread Scott Baker
On 02/13/2013 11:48 AM, Dave Dykstra wrote:
 Scott,
 
 If it's just for logging purposes, it would be better to use an http
 header such as User-Agent rather than putting it in the URL.  It is part
 of the http standard to use the whole URL as a caching index.


Good point... I can send a header instead. Thanks
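
Something along these lines should do it (a sketch; X-Serial is just a
made-up header name, and the logformat codes are the documented squid ones):

# clients tag the request instead of the URL, e.g.
#   curl -H 'X-Serial: 123456' http://example.com/static/app.js
# squid.conf: record that header in the access log
logformat withserial %ts.%03tu %>a %Ss/%03>Hs %<st %rm %ru %{X-Serial}>h
access_log /var/log/squid/access.log withserial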

-- 
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253


[squid-users] Caching URLs with a ? in them?

2013-02-12 Thread Scott Baker
I have a bunch of static content with appropriate Expires headers, but
the URL contains a ?serial=123456 where the serial number is dynamic.
Is squid smart enough to ignore the fact that the URL looks like a
dynamic request, and use the expire headers to see that it's indeed
static/cacheable content?

-- 
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253


[squid-users] cache_dir on /dev/shm?

2013-02-06 Thread Scott Baker
I want to store all my cache dir on a ram disk. I have this in my config:

cache_dir ufs /dev/shm/squid 1024 16 256

Is there any reason NOT to do this? Everything in my cache is
time-sensitive (it expires after 15 minutes), so I'm not worried about
losing data, just about best practices.

I also have:

cache_mem 1024 MB

in my config. With ONLY this entry, everything results in a TCP_MISS.
I'm guessing the cache_mem isn't used the same way as a cache_dir?
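
As far as I know, cache_mem only holds objects up to
maximum_object_size_in_memory, and that limit defaults to something fairly
small (8 KB in older releases, 512 KB in 3.x, if memory serves), so larger
objects never become memory hits on their own. A sketch of the relevant
knobs, with guessed sizes:

cache_mem 1024 MB
# objects bigger than this are never kept in RAM; the 4 MB below is a
# guess, not a recommendation
maximum_object_size_in_memory 4 MB
cache_dir ufs /dev/shm/squid 1024 16 256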

-- 
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253


Re: [squid-users] Caching HLS content?

2013-02-01 Thread Scott Baker
On 01/31/2013 04:30 PM, Leonardo Rodrigues wrote:

 an even better approach would be to correctly set up your webserver to
 send the appropriate expire times for the .m3u8 files, so that neither
 your caches nor anyone else's would cache them :)

 a correct expire time for the .ts files could be sent as well,
 allowing them to be cached
Oh, that makes way more sense. I don't know why I didn't think of that!
When I set that up, my TCP_REFRESH_UNMODIFIED requests disappear!
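
On the Apache side that amounts to something like this (a sketch, assuming
mod_headers is loaded):

<FilesMatch "\.m3u8$">
    Header set Cache-Control "no-cache"
</FilesMatch>
<FilesMatch "\.ts$">
    Header set Cache-Control "public, max-age=900"
</FilesMatch>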

-- 
Scott Baker - Canby Telcom 
System Administrator - RHCE - 503.266.8253



[squid-users] Reverse cache for HLS streaming

2013-01-31 Thread Scott Baker
I'm trying to set up Squid as a reverse proxy to cache HLS segments. We
have a very controlled environment, so I'd like it to cache every .ts
file it sees and never cache the .m3u8 files. I have a pretty generic
configuration (I think), and yet it seems that it's not caching anything.

I don't see any reason it WOULDN'T cache the files. The headers all
indicate that the content is cacheable, I think.

-

http_port 80 accel defaultsite=hls2.domain.tv no-vhost ignore-cc
cache_peer master-streamer.domain.tv parent 80 0 no-query originserver
name=myAccel no-digest

acl our_sites dstdomain hls2.domain.tv
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 2000 16 256
cache_mem 1024 MB

-

1359655780.097 45 65.182.224.20 TCP_MISS/206 1607080 GET http://hls2.domain.tv/katu/katu_996_92564.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655787.167 41 65.182.224.20 TCP_MISS/206 1607080 GET http://hls2.domain.tv/katu/katu_996_92564.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655792.110 42 65.182.224.20 TCP_MISS/206 1563276 GET http://hls2.domain.tv/katu/katu_996_92565.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655799.181 40 65.182.224.20 TCP_MISS/206 1563276 GET http://hls2.domain.tv/katu/katu_996_92565.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655804.114 37 65.182.224.20 TCP_MISS/206 1565532 GET http://hls2.domain.tv/katu/katu_996_92566.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655811.188 37 65.182.224.20 TCP_MISS/206 1565532 GET http://hls2.domain.tv/katu/katu_996_92566.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655816.133 39 65.182.224.20 TCP_MISS/206 1610088 GET http://hls2.domain.tv/katu/katu_996_92567.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655823.204 37 65.182.224.20 TCP_MISS/206 1610088 GET http://hls2.domain.tv/katu/katu_996_92567.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655828.139 37 65.182.224.20 TCP_MISS/206 1580948 GET http://hls2.domain.tv/katu/katu_996_92568.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655835.214 39 65.182.224.20 TCP_MISS/206 1580948 GET http://hls2.domain.tv/katu/katu_996_92568.ts - FIRSTUP_PARENT/65.182.224.89 video/MP2T

-

 HTTP/1.1 200 OK
 Date: Thu, 31 Jan 2013 18:11:44 GMT
 Server: Apache/2.2.22 (Fedora)
 Last-Modified: Thu, 31 Jan 2013 18:11:04 GMT
 ETag: 800182-181de4-4d4998cd170d4
 Accept-Ranges: bytes
 Content-Length: 1580516
 Content-Type: video/MP2T
 X-Cache: MISS from hls2.domain.tv
 X-Cache-Lookup: MISS from hls2.domain.tv:80
 Via: 1.1 hls2.domain.tv (squid/3.2.5)
 Connection: keep-alive
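
One more observation: every entry in that log is a 206, so the players are
sending Range requests, and a partial reply can't be stored as the full
object. A squid.conf sketch that tells squid to fetch the whole file
instead (the directives exist, the values are a guess):

# fetch the entire object even when the client only asks for a range
range_offset_limit -1
# don't abandon the transfer if the client disconnects early
quick_abort_min -1 KB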

-- 
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253


Re: [squid-users] Reverse cache for HLS streaming

2013-01-31 Thread Scott Baker
On 01/31/2013 01:41 PM, Eliezer Croitoru wrote:
 I have seen your logs and it seems like you have used curl to fetch a
 simple GET, while the clients are requesting partial content of the
 video, and squid (in any version) doesn't cache partial responses yet.
 You can try other alternatives that offer this kind of feature, or
 reassess the way your application works.

It appears my issue was time-related. The origin server I was using was
about 2 minutes ahead of my squid server, and I think that was causing it
not to cache anything. Once I put NTP on both of them, the issue went away.
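
For anyone hitting the same thing, a quick way to spot the skew is to query
the origin's clock without touching anything (assumes ntpdate is installed):

ntpdate -q master-streamer.domain.tv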

-- 
Scott Baker - Canby Telcom 
System Administrator - RHCE - 503.266.8253



[squid-users] Caching HLS content?

2013-01-31 Thread Scott Baker
I want to make sure that .m3u8 files are *never* cached. Those files are
updated every 5 seconds on my server, and always have the same name.
What is the best way to make sure that they are never cached? This is
what I came up with:

refresh_pattern \.m3u8  0   0%  0

Conversely, the MPEG segments are .ts files and are ALWAYS the same. The
file names roll, so once an MPEG segment is created it will *never* get
updated. Those files should therefore ALWAYS be cached, and there is no
reason to ever refresh them. How do I ensure that .ts segments are cached
and never re-validated? The content expires after 15 minutes (it's live
video), so there is no reason to keep any .ts files older than 15 minutes.
This is what I came up with:

refresh_pattern \.ts    900 100%    900

Currently I'm seeing a lot of TCP_REFRESH_UNMODIFIED in my logs for .ts
segments. I must be doing something wrong.
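
A fuller sketch of how I expect the rules to fit together; the override
flags are documented refresh_pattern options, but I haven't verified this
exact combination:

# .m3u8 playlists: never cache (must come before any catch-all rule)
refresh_pattern -i \.m3u8$  0   0%   0
# .ts segments: cache for 15 minutes and skip revalidation
refresh_pattern -i \.ts$    900 100% 900 override-expire override-lastmod ignore-reload
refresh_pattern .           0   20%  4320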

-- 
Scott Baker - Canby Telcom
System Administrator - RHCE - 503.266.8253


[squid-users] ip_wccp problems

2003-11-07 Thread Scott Baker
I compiled my own kernel (2.4.22) and the ip_wccp module on my Red Hat 9
system as follows:

gcc -D__KERNEL__ -I/usr/src/linux-2.4.22/include -Wall -Wstrict-prototypes 
-Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer 
-pipe -mpreferred-stack-boundary=2 -march=i686   -nostdinc -iwithprefix 
include -DKBUILD_BASENAME=sched  -fno-omit-frame-pointer -c -o ip_wccp.o 
ip_wccp.c

So now I have ip_wccp.o which I put in my /lib/modules/ directory, but it 
will NOT load to save my life.

---

[EMAIL PROTECTED] linux]# modprobe ip_wccp
/lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: couldn't find the kernel 
version the module was compiled for
/lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: insmod 
/lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o failed
/lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: insmod ip_wccp failed

---

[EMAIL PROTECTED] linux]# insmod /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o
/lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: couldn't find the kernel 
version the module was compiled for

---

[EMAIL PROTECTED] linux]# uname -a
Linux localhost.localdomain 2.4.22 #5 Wed Nov 5 09:37:47 PST 2003 i686 i686 
i386 GNU/Linux

---

Am I missing something here?
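
One detail that stands out in the gcc line above: there is no -DMODULE,
which (if memory serves) is what embeds the kernel version string that
insmod is complaining about on 2.4 kernels. A sketch of a more conventional
2.4 module compile:

gcc -D__KERNEL__ -DMODULE -I/usr/src/linux-2.4.22/include \
    -Wall -O2 -fomit-frame-pointer -c -o ip_wccp.o ip_wccp.c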

---
Scott Baker - Webster Internet
Network Engineer - RHCE
[EMAIL PROTECTED] - 503.266.8253 



[squid-users] Transparent Cache (WCCP) without ip_wccp

2003-11-04 Thread Scott Baker
I'm trying to set up transparent caching through my Cisco router, and I 
want to use WCCP for it.  Everything I've read points to that as the best 
solution.  For various reasons I'm having problems getting ip_wccp to 
compile against the specialized kernel I'm running, and I'm wondering if 
it's possible to get WCCP working without that kernel module.

I'm able to get the GRE tunnel up, and squid is configured but the router 
doesn't see the squid box because it's not announcing itself via WCCP.  Is 
there another way to get the two to talk?
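
As far as I can tell, the HERE_I_AM announcements come from squid itself
rather than from the kernel module (ip_wccp only decapsulates the GRE
traffic coming back from the router), so the announcing side should just be
a matter of squid.conf. A sketch with a placeholder router address:

wccp_router 192.0.2.1
wccp_version 4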

---
Scott Baker - Webster Internet
Network Engineer - RHCE
[EMAIL PROTECTED] - 503.266.8253