Re: [squid-users] Inelegant routing based on file size

2008-10-20 Thread Regardt van de Vyver

Vernon Kennedy-Van Dam wrote:

Thanks very much, Amos. The feedback is much appreciated.


  

Hi All

I am looking to route download traffic based on the file size of the download
request. If a user on our network were to download a 10 MB file, he would get
routed through link 1. If a user requests a 100 MB file download, he gets
routed through link 2.

How is this achieved?



It cannot be. The file size is not known until after the file starts arriving.
Sometimes not even until it has finished arriving.

The best you may possibly do is create a custom external ACL helper to
scan store.log for previous file sizes of the requested URL, then use
tcp_outgoing_address based on a best guess.
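
A rough illustration of that wiring in squid.conf (the helper path, ACL
name and link addresses below are placeholders, not a tested recipe):

  # helper receives the URL, replies OK for "probably big", ERR otherwise
  external_acl_type sizeguess %URI /usr/local/bin/size_guess_helper
  acl big_download external sizeguess
  tcp_outgoing_address 192.0.2.2 big_download
  tcp_outgoing_address 192.0.2.1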

This, however, breaks completely on:
* new and unknown URLs,
* changed URLs,
* dynamic URLs (very common!),
* and most websites updated by their webmaster between your visits
  (almost as common as dynamic URLs).

Amos

  
Well, we used a rather complicated way to achieve something similar - so
it seems possible.


Use url_rewrite_program to redirect the request to an inspection script.

The inspection script then fetches the HTTP headers of the file to
determine the file size. If it is small enough, or no size is indicated,
the script returns the URL untouched so that the local cache processes it.


If the file is larger, you can redirect to an alternate URL/script that
does the fetching. The only thing I've not tested is making the redirect
send to another proxy server; I'm not entirely sure of that syntax.


But you get the general idea.
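
A minimal sketch of such a helper in Python (the threshold, helper name
and alternate fetcher URL are assumptions for illustration, not anything
from the setup described above):

  #!/usr/bin/env python3
  # size_redirect.py - url_rewrite_program sketch. Reads the classic
  # rewriter protocol ("URL ip/fqdn ident method ..." per line), HEADs
  # the URL, and rewrites requests for large files to an alternate
  # fetcher while passing small or unknown ones through untouched.
  import sys
  import urllib.parse
  import urllib.request

  THRESHOLD = 10 * 1024 * 1024                     # assumed 10 MB cut-off
  FETCHER = "http://fetcher.example.com/get?url="  # hypothetical fetcher

  def content_length(url):
      # Return the advertised Content-Length, or 0 if unknown.
      try:
          req = urllib.request.Request(url, method="HEAD")
          with urllib.request.urlopen(req, timeout=5) as resp:
              return int(resp.headers.get("Content-Length") or 0)
      except Exception:
          return 0          # unreachable or no length: treat as small

  for line in sys.stdin:
      fields = line.split()
      if not fields:
          sys.stdout.write("\n")
      elif content_length(fields[0]) > THRESHOLD:
          big = FETCHER + urllib.parse.quote(fields[0], safe="")
          sys.stdout.write(big + "\n")             # hand off to fetcher
      else:
          sys.stdout.write(fields[0] + "\n")       # untouched: cache it
      sys.stdout.flush()

Wired up with url_rewrite_program /usr/local/bin/size_redirect.py and a
generous url_rewrite_children, since each HEAD request blocks one helper.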




[squid-users] Strange entries in cache.log (3.0.STABLE10)

2008-10-20 Thread Malte Schröder
Hello,
I am seeing entries like the ones below quite frequently. It looks to me as
if the Content-Language and Content-Location headers are not parsed
correctly, since I cannot see this stuff in the traffic going to the Squid.
In this config Squid has a WebWasher installation as parent and an
imagefilter as ICAP respmod.


2008/10/20 10:52:15| WARNING: unparseable HTTP header field 
{Content-Language123453}
2008/10/20 10:52:15|  in {Accept-Ranges: bytes
Cache-Control: max-age=31536000
Content-Language123453
Content-Type: image/png
Date: Mon, 20 Oct 2008 08:52:14 GMT
Expires: Tue, 20 Oct 2009 08:52:14 GMT
Last-Modified: Fri, 12 Sep 2008 19:34:25 GMT
Proxy-Connection: close
Server: Apache
}
2008/10/20 10:52:15| WARNING: unparseable HTTP header field 
{Content-Language34536}
2008/10/20 10:52:15|  in {Accept-Ranges: bytes
Cache-Control: max-age=31536000
Content-Language34536
Content-Type: image/png
Date: Mon, 20 Oct 2008 08:52:15 GMT
Expires: Tue, 20 Oct 2009 08:52:15 GMT
Last-Modified: Fri, 12 Sep 2008 19:34:25 GMT
Proxy-Connection: close
Server: Apache
}

2008/10/20 08:18:18| WARNING: unparseable HTTP header field 
{Content-Location107028}
2008/10/20 08:18:18|  in {Accept-Ranges: bytes
Content-Length: 107028
Content-Location107028
Content-Type: image/jpeg
Date: Mon, 20 Oct 2008 06:18:18 GMT
ETag: "923c75aa5230c91:237d"
Last-Modified: Fri, 17 Oct 2008 12:19:56 GMT
Proxy-Connection: keep-alive
Server: Microsoft-IIS/6.0
Via: 1.1 webwasher (Webwasher 6.8.2.3889)
X-Powered-By: ASP.NET
}


[squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Aviral Pandey
My server doesn't support gzip and hence serves content in
unzipped/raw/deflate format.
When my client asks for "Accept-Encoding: gzip, deflate", Squid sends it
raw content. Shouldn't it gzip and then send? Is there a way in which
this can be achieved?


Thanks,
Avi



Re: [squid-users] Squid conf for live video stream

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 19:13 +1300, Amos Jeffries wrote:
> You need to fix the VOD implementation to use cacheable URI. Or scream 
> at the vendors who wrote it so they fix it.

And most won't fix it, as they regard this cache unfriendliness as one of
the premium features of their system.

Regards
Henrik




Re: [squid-users] Squid conf for live video stream

2008-10-20 Thread viveksnv




Thanks Henrik,

All the videos are cacheable, because the videos are maintained by us.

But the origin server is not near us, so I try to cache and serve them to
customers quickly.

The objects are indeed cached by Squid, but performance is very slow, so I
need some tips to improve it.






On mån, 2008-10-20 at 19:13 +1300, Amos Jeffries wrote:
You need to fix the VOD implementation to use cacheable URI. Or scream
at the vendors who wrote it so they fix it.


And most won't fix it as they regard this cache unfriendlyness as one of
the premium features of their system.

Regards
Henrik








Re: [squid-users] Strange entries in cache.log (3.0.STABLE10)

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 11:01 +0200, Malte Schröder wrote:
> Hello,
> I am seeing entries like the ones below quite frequently. It looks to me
> as if the Content-Language and Content-Location headers are not parsed
> correctly, since I cannot see this stuff in the traffic going to the
> Squid. In this config Squid has a WebWasher installation as parent and an
> imagefilter as ICAP respmod.

Have you inspected the ICAP responses from imageFilter, and paired this
with the error? I suspect the error may be from there.

Regards
Henrik




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 15:23 +0530, Aviral Pandey wrote:
> When my client asks for Accept-Encoding: gzip, deflate, squid is sending 
> it raw content. Shouldn't it gzip and then send?

No, Content-Encoding (just as Content-Language) is a task for webservers,
not for semantically transparent proxies such as Squid.

> Is there a way in which 
> this can be achieved?

There is an addon for squid-3, but it apparently needs a bit of work to
apply to current Squid-3 sources.

Regards
Henrik




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Simon Waters
On Monday 20 October 2008 10:53:14 Aviral Pandey wrote:
> My server doesn't support gzip and hence serves content in
> unzipped/raw/deflate format.
> When my client asks for Accept-Encoding: gzip, deflate, squid is sending
> it raw content. Shouldn't it gzip and then send?

No, it could (in theory) compress but it doesn't have to. 

Indeed the RFCs suggest it probably shouldn't, but in a reverse proxy
configuration it may make sense for something in front of the webserver to
be doing this. For most servers it is probably better to compress before
the data is sent down ANY network cable, given how fast modern CPUs and
buses are, and how slow networks are.

> Is there a way in which this can be achieved?

My understanding is that this is a planned feature for 3.mumble; there are
a number of patches that do some of it - Henrik can probably update folk on
what works.

http://devel.squid-cache.org/projects.html#gzip

Neither nginx nor varnish does this either, I believe. Although I've seen
suggestions that nginx can, so perhaps it just needs a bit of fiddling.

Apache will do this as a reverse proxy, but Apache as a reverse proxy is
interesting - most places I've seen it done, it is sitting on the webserver
on port 80 forwarding to less capable webservers on another port. Although
Apache can do quite a lot as a reverse proxy, the first person I saw try to
set it up created an open proxy - so be careful.

What webserver is it that doesn't support compression?

I looked at this somewhat, because we have an application that serves
uncompressed content, which makes it slow and bandwidth hungry. In our case
it turns out to be a bug in the specific Apache version we are using
(Debian Sarge with fastcgi) that prevents the compress filter in Apache
from working as it should, so I plan simply to upgrade to something
supported by the Debian security team, at which point it will "just work"
(tested already).



Re: [squid-users] Squid conf for live video stream

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 06:20 -0400, [EMAIL PROTECTED] wrote:

> All the videos are cachable. Because the video's are maintained by us.
> 
> But the origin server not near by us. So i try to cache and serve to 
> customers quickly.

So fix the origin. Alternatively you can play games with a url rewriter
to canonicalize the requested URLs.
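
For instance, a toy url_rewrite_program that strips the query string,
assuming the varying part really is an ignorable session token (an
assumption - verify it before using anything like this):

  #!/usr/bin/env python3
  # canonical_rewrite.py - url_rewrite_program sketch: map every request
  # for the same video to one cacheable URL by dropping the query string.
  import sys

  for line in sys.stdin:
      fields = line.split()
      # echo the URL minus its query string; empty input echoes back empty
      sys.stdout.write((fields[0].split("?", 1)[0] if fields else "") + "\n")
      sys.stdout.flush()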

The easiest way to achieve what you want is to NOT use a video streaming
server for distributing the videos. Instead store the videos as plain
files on an HTTP server.

Sorry for being a bit dense in the response. If you want more precise
answers then provide more information on what requests & responses look
like, and why.

Regards
Henrik




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Aviral Pandey
Thanks Henrik... But I have seen the reverse working, i.e. the server
sending a gzipped response and Squid serving a deflated one when the
client asks for deflated content.


My squid is 2.6-16.


Thanks,
Avi
http://aviral.piworx.com

Henrik Nordstrom wrote:

On mån, 2008-10-20 at 15:23 +0530, Aviral Pandey wrote:
  
When my client asks for Accept-Encoding: gzip, deflate, squid is sending 
it raw content. Shouldn't it gzip and then send?



No, Content-Encoding (just as Content-Languate) is a task of webservers,
not semantically transparent proxies such as Squid.

  
Is there a way in which 
this can be achieved?



There is an addon for squid-3, but it apparently needs a bit of work to
apply to current Squid-3 sources.

Regards
Henrik
  




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 16:08 +0530, Aviral Pandey wrote:
> Thanks Henrik...But I have seen vice-versa to be working i.e., server 
> sending a gzipped response and squid serving deflated one when client 
> asks for deflated content

This is not available in any Squid version.

But Squid does support servers doing this correctly, by caching both
gzipped and plain variants of the resource.
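
A server doing it correctly marks the compressed variant so that caches
can store both, along the lines of:

  HTTP/1.1 200 OK
  Content-Encoding: gzip
  Vary: Accept-Encoding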

All servers I know of that support gzip also support serving plain
variants when the client does not support gzip.

Regards
Henrik




[squid-users] Re: acl deny in transparent cache

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 05:42 -0500, Jian Wang wrote:
> I'm not sure how to do this. Is it externally changing the
> configuration of the router? For example, in the Squid
> external_acl_helper code, telnet to the router and add an acl line to
> the configuration of the router?

Yes that's one way.

> Isn't this way insecure? Furthermore, if I have thousands of client
> IPs, it sounds to me like I will have to add thousands of acl
> configuration lines to the router.

Yes.

> Or am I totally misunderstanding your suggestion?

No.

But it may be possible to do the same in the local firewall on the proxy
server instead of the router. Depends on your setup.
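
If the proxy runs Linux and intercepts traffic to, say, port 3128, the
local-firewall variant might look like this (address and port are
assumptions; adapt to the actual interception setup):

  # reject one client's intercepted web traffic on the proxy box itself
  iptables -I INPUT -s 192.0.2.45 -p tcp --dport 3128 -j REJECT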

Regards
Henrik




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 11:21 +0100, Simon Waters wrote:

> Apache will do this as a reverse proxy, but Apache as a reverse proxy is 
> interesting - most places I've seen it done it is sitting on the webserver on 
> port 80 forwarding to less capable webservers on another port. Although 
> Apache can do quite a lot as a reverse proxy the first person I saw who tried 
> to set it up created an open proxy - so be careful.

Also mod_deflate still works very sub-optimally when it comes to HTTP &
caching. Currently cache validation is a bit broken, after it was fixed to
at least minimally comply with HTTP specifications.

There is an open task in the Apache project for supporting mod_deflate
and similar filters that conditionally modify the response entity and
thereby create new variants of the requested resource. HTTP isn't really
designed for this, and getting it right requires some care (ETag needs to
be remapped in a way that If-* conditional requests still do the right
thing).

Regards
Henrik




[squid-users] Re: Unable to match empty user-agent strings?

2008-10-20 Thread James Cohen
After some further testing and looking closely at the request headers,
it turns out that this is failing because the User-Agent header field
isn't present (rather than being present but empty).

Here's my workaround/solution which seems to work nicely.

acl image_leechers browser ^$
acl image_leechers browser Wget

acl has_user_agent browser ^.+$


http_access deny !has_user_agent
http_access deny image_leechers


I promise not to make a habit of just conversing with myself on this list...

2008/10/20 James Cohen <[EMAIL PROTECTED]>:
> Hi,
>
> I think I've found a bug but first wanted to double-check I wasn't
> doing anything dumb.
>
> In our reverse proxy setup we want to block people from "leeching" the
> images using Wget or similar applications. To do this we want to block
> user agents that match "Wget" and, because lots of people use curl or
> their own home-brew clients, anything with an empty user agent string.
>
> I added the following acl rule:
>
> # Block automated processes from requesting our images
> acl image_leechers browser ^$
> acl image_leechers browser Wget
>
> and later on...
>
> http_access deny image_leechers
>
> Requests that contain Wget are being blocked exactly as expected by
> the proxy. Empty requests are still going through to the parent
> server:
>
>
> Request with Wget in the user agent request headers (correct behaviour)
>
> $ wget  -S http://images.xxx.com/preview/1134/35121981.jpg
> --11:29:45--  http://images.xxx.com/preview/1134/35121981.jpg
>   => `35121981.jpg'
> Resolving images.xxx.com... 62.216.237.30
> Connecting to images.xxx.com|62.216.237.30|:80... connected.
> HTTP request sent, awaiting response...
>  HTTP/1.0 403 Forbidden
>  Server: squid/3.0.STABLE9
>  Mime-Version: 1.0
>  Date: Mon, 20 Oct 2008 10:29:45 GMT
>  Content-Type: text/html
>  Content-Length: 1653
>  Expires: Mon, 20 Oct 2008 10:29:45 GMT
>  X-Squid-Error: ERR_ACCESS_DENIED 0
>  X-Cache: MISS from ws2
>  Via: 1.0 ws2 (squid/3.0.STABLE9)
>  Connection: close
> 11:29:45 ERROR 403: Forbidden.
>
> And a similar request with an empty user agent string (incorrect - the
> request is being passed back to the parent where it returns a 403)
>
> $ wget -U "" -S http://images.xxx.com/preview/1134/james.jpg
> --11:30:09--  http://images.xxx.com/preview/1134/james.jpg
>   => `james.jpg'
> Resolving images.xxx.com... 62.216.237.30
> Connecting to images.xxx.com|62.216.237.30|:80... connected.
> HTTP request sent, awaiting response...
>  HTTP/1.0 403 Forbidden
>  Content-Type: text/html
>  Content-Length: 345
>  Date: Mon, 20 Oct 2008 10:30:09 GMT
>  Server: lighttpd/1.4.20
>  X-Cache: MISS from ws2
>  Via: 1.0 ws2 (squid/3.0.STABLE9)
>  Connection: close
> 11:30:09 ERROR 403: Forbidden.
>
>
> Thanks,
>
> James
>


Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Amos Jeffries

Aviral Pandey wrote:
Thanks Henrik... But I have seen the reverse working, i.e. the server
sending a gzipped response and Squid serving a deflated one when the
client asks for deflated content.


Are you sure? Squid does decode chunked _transfer_ coding, but that's a
special encoding applied only between the web server and Squid (and the
web server is broken to be trying it with Squid anyway).


...or perhaps you have a version of that gzip patch applied. It's not new.

... or maybe you hit the Vary bug where Squid always presented the
deflate copy, no matter what, once it received the first deflate copy. :(


Amos




My squid is 2.6-16.


Thanks,
Avi
http://aviral.piworx.com

Henrik Nordstrom wrote:

On mån, 2008-10-20 at 15:23 +0530, Aviral Pandey wrote:
 
When my client asks for Accept-Encoding: gzip, deflate, squid is 
sending it raw content. Shouldn't it gzip and then send?



No, Content-Encoding (just as Content-Language) is a task for webservers,
not for semantically transparent proxies such as Squid.

 

Is there a way in which this can be achieved?



There is an addon for squid-3, but it apparently needs a bit of work to
apply to current Squid-3 sources.

Regards
Henrik
  





--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Unable to match empty user-agent strings?

2008-10-20 Thread Amos Jeffries

James Cohen wrote:

Hi,

I think I've found a bug but first wanted to double-check I wasn't
doing anything dumb.

In our reverse proxy setup we want to block people from "leeching" the
images using Wget or similar applications. To do this we want to block
user agents that match "Wget" and, because lots of people use curl or
their own home-brew clients, anything with an empty user agent string.

I added the following acl rule:

# Block automated processes from requesting our images
acl image_leechers browser ^$
acl image_leechers browser Wget

and later on...

http_access deny image_leechers

Requests that contain Wget are being blocked exactly as expected by
the proxy. Empty requests are still going through to the parent
server:


It's not so much an empty string as a completely missing header.
Squid can only test what it has against what it checks, if you get my
meaning.


I haven't tested it, but you might have better luck if you invert the 
test to allow access to okay agents and deny the rest.
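
For example (untested, and the agent substrings are illustrative only;
most real browsers send "Mozilla" somewhere in their UA string):

  acl known_agents browser Mozilla Opera
  http_access deny !known_agents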


All they have to do is send -U "fu" and they get past the wget blocker.
Not to mention that the real browser UAs are commonly known, and script
kiddies are often advised to spoof the IE agent to get past site barriers
and brokenness in one action.


Amos



Request with Wget in the user agent request headers (correct behaviour)

$ wget  -S http://images.xxx.com/preview/1134/35121981.jpg
--11:29:45--  http://images.xxx.com/preview/1134/35121981.jpg
   => `35121981.jpg'
Resolving images.xxx.com... 62.216.237.30
Connecting to images.xxx.com|62.216.237.30|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.0 403 Forbidden
  Server: squid/3.0.STABLE9
  Mime-Version: 1.0
  Date: Mon, 20 Oct 2008 10:29:45 GMT
  Content-Type: text/html
  Content-Length: 1653
  Expires: Mon, 20 Oct 2008 10:29:45 GMT
  X-Squid-Error: ERR_ACCESS_DENIED 0
  X-Cache: MISS from ws2
  Via: 1.0 ws2 (squid/3.0.STABLE9)
  Connection: close
11:29:45 ERROR 403: Forbidden.

And a similar request with an empty user agent string (incorrect - the
request is being passed back to the parent where it returns a 403)

$ wget -U "" -S http://images.xxx.com/preview/1134/james.jpg
--11:30:09--  http://images.xxx.com/preview/1134/james.jpg
   => `james.jpg'
Resolving images.xxx.com... 62.216.237.30
Connecting to images.xxx.com|62.216.237.30|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.0 403 Forbidden
  Content-Type: text/html
  Content-Length: 345
  Date: Mon, 20 Oct 2008 10:30:09 GMT
  Server: lighttpd/1.4.20
  X-Cache: MISS from ws2
  Via: 1.0 ws2 (squid/3.0.STABLE9)
  Connection: close
11:30:09 ERROR 403: Forbidden.


Thanks,

James



--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Unable to match empty user-agent strings?

2008-10-20 Thread James Cohen
2008/10/20 Amos Jeffries <[EMAIL PROTECTED]>:
>
> It's not so much an empty string as a completely missing header.
> Squid can only test what it has against what it checks, if you get my
> meaning.
>
> I haven't tested it, but you might have better luck if you invert the test
> to allow access to okay agents and deny the rest.
>
> All they have to do is send -U "fu" and they get past the wget blocker.
> Not to mention that the real browser UAs are commonly known, and script
> kiddies are often advised to spoof the IE agent to get past site barriers
> and brokenness in one action.
>
> Amos
>

Thanks Amos,

I figured that out just after I'd posted my original mail.

I appreciate that the blocking is pretty weak, but it seems that the
majority of the unwanted traffic is some kind of automated client not
supplying any User-Agent at all.

I guess we're going for the "low-hanging fruit"; anyone who really wants
the content will be able to fetch it (by spoofing a real user agent), but
this should be a way to block a bunch of it.

James


Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Aviral Pandey



What webserver is it that doesn't support compression?

  
It's a custom webserver, a very simple one which does file serving and
has no application built upon it. I am using Squid as a reverse proxy here.


[squid-users] Unable to match empty user-agent strings?

2008-10-20 Thread James Cohen
Hi,

I think I've found a bug but first wanted to double-check I wasn't
doing anything dumb.

In our reverse proxy setup we want to block people from "leeching" the
images using Wget or similar applications. To do this we want to block
user agents that match "Wget" and, because lots of people use curl or
their own home-brew clients, anything with an empty user agent string.

I added the following acl rule:

# Block automated processes from requesting our images
acl image_leechers browser ^$
acl image_leechers browser Wget

and later on...

http_access deny image_leechers

Requests that contain Wget are being blocked exactly as expected by
the proxy. Empty requests are still going through to the parent
server:


Request with Wget in the user agent request headers (correct behaviour)

$ wget  -S http://images.xxx.com/preview/1134/35121981.jpg
--11:29:45--  http://images.xxx.com/preview/1134/35121981.jpg
   => `35121981.jpg'
Resolving images.xxx.com... 62.216.237.30
Connecting to images.xxx.com|62.216.237.30|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.0 403 Forbidden
  Server: squid/3.0.STABLE9
  Mime-Version: 1.0
  Date: Mon, 20 Oct 2008 10:29:45 GMT
  Content-Type: text/html
  Content-Length: 1653
  Expires: Mon, 20 Oct 2008 10:29:45 GMT
  X-Squid-Error: ERR_ACCESS_DENIED 0
  X-Cache: MISS from ws2
  Via: 1.0 ws2 (squid/3.0.STABLE9)
  Connection: close
11:29:45 ERROR 403: Forbidden.

And a similar request with an empty user agent string (incorrect - the
request is being passed back to the parent where it returns a 403)

$ wget -U "" -S http://images.xxx.com/preview/1134/james.jpg
--11:30:09--  http://images.xxx.com/preview/1134/james.jpg
   => `james.jpg'
Resolving images.xxx.com... 62.216.237.30
Connecting to images.xxx.com|62.216.237.30|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.0 403 Forbidden
  Content-Type: text/html
  Content-Length: 345
  Date: Mon, 20 Oct 2008 10:30:09 GMT
  Server: lighttpd/1.4.20
  X-Cache: MISS from ws2
  Via: 1.0 ws2 (squid/3.0.STABLE9)
  Connection: close
11:30:09 ERROR 403: Forbidden.


Thanks,

James


Re: [squid-users] Strange entries in cache.log (3.0.STABLE10)

2008-10-20 Thread Malte Schröder
On Mon, 20 Oct 2008 12:17:12 +0200
Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

> On mån, 2008-10-20 at 11:01 +0200, Malte Schröder wrote:
> Have you inspected the ICAP responses from imageFilter, and paired this
> with the error? I suspect the error may be from there.

No, I had not done that. But I just did, and I found the broken headers
in the response of the imagefilter. So the problem is not in Squid.

How could I ever have doubted Squid... ;)
Thanks for the hint.

> 
> Regards
> Henrik


RE: [squid-users] Authentication Issue with Squid and mixed BASIC/NTLM auth

2008-10-20 Thread Chris Natter
This has all been on a test machine that has access to OWA and the
internal network; nothing production was touched :) I work with a very
large network, so I have a lot of resources at my disposal.

The config (it's been sanitized for confidentiality reasons): 

https_port 10.212.134.117:443 vhost protocol=http
cert=/etc/squid/certs/mail.bar.com.san.pem
https_port 10.212.134.119:443 vhost defaultsite=mail.foo.com
protocol=http cert=/etc/squid/certs/foo.pem
https_port 10.212.134.121:443 vhost protocol=http
cert=/etc/squid/certs/domain.domain.com.san.pem
https_port 10.212.134.123:443 vhost defaultsite=mail.domain2.com
protocol=http cert=/etc/squid/certs/mail.domain2.com.pem
https_port 10.212.134.123:8000 vhost defaultsite=domain3.domain3.com
protocol=http cert=/etc/squid/certs/domain3.domain.pem
https_port 10.212.134.125:443 vhost defaultsite=owa.domain4.com
protocol=http cert=/etc/squid/certs/owa.domain4.pem
cache_peer 10.212.133.52 parent 80 0 proxy-only no-query no-digest
originserver front-end-https=on login=PASS
cache_peer 10.212.133.102 parent 80 0 proxy-only no-query no-digest
originserver front-end-https=on login=PASS 
cache_peer 10.212.133.131 parent 80 0 proxy-only no-query no-digest
originserver front-end-https=on login=PASS 
cache_peer 10.212.133.56 parent 80 0 proxy-only no-query no-digest
originserver front-end-https=on login=PASS
cache_peer 192.168.12.44 parent 80 0 proxy-only no-query no-digest
originserver front-end-https=on login=PASS
cache_peer_domain 10.212.133.52 mail.bar.com
cache_peer_domain 10.212.133.102 mail.foo.com
cache_peer_domain 10.212.133.56 domain1.domain1.com domain2.domain2.com
domain3.domain3.com
cache_peer_domain 10.212.133.131 mail.domain5.com
autodiscover.domain5.com
cache_peer_domain 192.168.12.44 owa.domain4.com
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl owa_access myport 80 443 8000
http_access allow all owa_access
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
coredump_dir /var/spool/squid
extension_methods RPC_IN_DATA RPC_OUT_DATA

The fact that NTLM is supposed to always require a domain is strange
indeed. I can confirm that it is negotiating NTLM too; we sync
ActiveSync phones off of the proxy, and they require NTLM for auth.

-Chris

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 17, 2008 10:31 PM
To: Chris Natter
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Authentication Issue with Squid and mixed
BASIC/NTLM auth


Um, something weird is going on. I'm a little scared by the double sets
of bad news.
Can you confirm that your in-use systems are okay, and that I haven't led
you to a point where anything serious is broken? (i.e. this is all
isolated on a test machine where it's okay to break?)


Chris Natter wrote:
> Hmmm, strange. I tested 2.7STABLE4, but it doesn't seem to be stripping
> the DOMAIN, it will still accept only DOMAIN\USERNAME. Perhaps I'm
> missing something?

I've looked at it closer, and the patches which I saw earlier were for a
slightly different helper (mapping NTLM front-end auth to an LDAP backend).

Henrik informs me that NTLM always needs the domain. Which makes me
wonder why you didn't need it in 3.0.

> 
> I also tested squid-3.1-20081016, built with a spec file adopted from a
> squid3.0STABLE7 Redhat package:
> 
> configure \
>--exec_prefix=/usr \
>--bindir=%{_sbindir} \
>--libexecdir=%{_libdir}/squid \
>--localstatedir=/var \
>--datadir=%{_datadir} \
>--sysconfdir=/etc/squid \
>--disable-dependency-tracking \
>--enable-arp-acl \
>--enable-auth="basic,digest,ntlm,negotiate" \
>--enable-basic-auth-helpers="LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL" \
>--enable-cache-digests \
>--enable-cachemgr-hostname=localhost \
>--enable-delay-pools \
>--enable-digest-auth-helpers="password" \
>--enable-epoll \
>--enable-external-acl-

[squid-users] Secondary Cache

2008-10-20 Thread Monah Baki
Hi All,

Can I have my squid cache be a secondary cache to a bluecoat server?


Thanks


Re: [squid-users] Secondary Cache

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 19:57 +0300, Monah Baki wrote:

> Can I have my squid cache be a secondary cache to a bluecoat server?

Yes.
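
For example (hostname and port below are placeholders):

  # treat the BlueCoat box as a parent cache
  cache_peer bluecoat.example.com parent 8080 0 no-query default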

Regards
Henrik




[squid-users] Objects Release from Cache Earlier Than Expected

2008-10-20 Thread BUI18
Hi -

I have been trying to track down an issue with Squid 2.6 STABLE18 and why users 
were getting TCP_REFRESH_MISS instead of TCP_REFRESH_HIT on files that were 
recently cached.  We first noticed that users were getting misses when we 
expected them to receive hits.

I have set the min and max age to be 5 and 7 days respectively. When I look
in the store.log file, I do see objects which were known to have been cached
today (based on the time/date stamp in the file name), yet they have a
status code of RELEASE.
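
(In squid.conf terms that corresponds to something like the refresh_pattern
below - values are minutes, so 7200 = 5 days and 10080 = 7 days; the URL
pattern and percentage are placeholders, not my actual line:)

  refresh_pattern -i \.vid$  7200  50%  10080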

The cache_dir (1 TB) on this system is only 25% full.  The low watermark is set 
at 95% with high at 97%.

Does anyone have any ideas on why Squid would appear to purge the object
earlier than expected?

Thanks in advance.




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-20 Thread BUI18
Hi -

Here's some additional info I noticed from the store.log.

1224524455.351 SWAPOUT 00 0003A6CB 7377CBD1A7584A5D7C7FD06B5B827595 200 1224524431 1224522501 -1 video/jpeg 1337100/1337100 GET http://ftp.mydomain.com/myserver/websites/data/MyVideoFile1020130441180.vid
1224544851.517 RELEASE 00 0003A6CB CD5B96F66CC94483D586D7E67A76A94C 200 1224524431 1224522501 -1 video/jpeg 1337100/-279 GET http://ftp.mydomain.com/myserver/websites/data/MyVideoFile1020130441180.vid
1224544862.563 SWAPOUT 00 0003CA26 7377CBD1A7584A5D7C7FD06B5B827595 200 1224544840 1224522501 -1 video/jpeg 1337100/1337100 GET http://ftp.mydomain.com/myserver/websites/data/MyVideoFile1020130441180.vid

The trace breaks down as follows:
1) The file was first pre-fetched by a wget program.
2) The file was released by the cache.
3) The file was re-fetched when a user tried to access it.

The thing that stuck out is that in the RELEASE line of the log, the
real-length is a negative number (-279). What does this mean exactly?


- Original Message 
From: BUI18 <[EMAIL PROTECTED]>
To: squid-users@squid-cache.org
Sent: Monday, October 20, 2008 4:02:52 PM
Subject: [squid-users] Objects Release from Cache Earlier Than Expected

Hi -

I have been trying to track down an issue with Squid 2.6 STABLE18 and why users 
were getting TCP_REFRESH_MISS instead of TCP_REFRESH_HIT on files that were 
recently cached.  We first noticed that users were getting misses when we 
expected them to receive hits.

I have set the min and max age to be 5 and 7 days respectively. When I look
in the store.log file, I do see objects which were known to have been cached
today (based on the time/date stamp in the file name), yet they have a
status code of RELEASE.

The cache_dir (1 TB) on this system is only 25% full.  The low watermark is set 
at 95% with high at 97%.

Does anyone have any ideas on why Squid would appear to purge the object
earlier than expected?

Thanks in advance.




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 16:02 -0700, BUI18 wrote:
> Hi -
> 
> I have been trying to track down an issue with Squid 2.6 STABLE18 and
> why users were getting TCP_REFRESH_MISS instead of TCP_REFRESH_HIT on
> files that were recently cached.  We first noticed that users were
> getting misses when we expected them to receive hits.

TCP_REFRESH_MISS is a cache validation indicating that the object has
been updated on the origin server.

> I have set the min and max age to be 5 and 7 days respectively. When
> I look in the store.log file, I do see objects which were known to
> have been cached today (based on the time/date stamp in the file name),
> yet they have a status code of RELEASE.

And you are sure it wasn't simply replaced with a newer copy of the same
URL?

Regards
Henrik




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-20 Thread BUI18
I'm not sure what you mean by a newer copy of the same URL. Can you
elaborate on that a bit?

As far as I know, the aspx page displays a list of buttons, one for each
video file. When the user clicks a button, it references the URL.

I've seen it where the user clicks the link and gets a TCP_REFRESH_HIT,
but if I come back a day later (well within my min/max settings), I get a
TCP_REFRESH_MISS.

I also previously posted additional info from the store.log, which shows
the object being cached and then released after a short time.





- Original Message 
From: Henrik Nordstrom <[EMAIL PROTECTED]>
To: BUI18 <[EMAIL PROTECTED]>
Cc: squid-users@squid-cache.org
Sent: Monday, October 20, 2008 4:55:41 PM
Subject: Re: [squid-users] Objects Release from Cache Earlier Than Expected

On mån, 2008-10-20 at 16:02 -0700, BUI18 wrote:
> Hi -
> 
> I have been trying to track down an issue with Squid 2.6 STABLE18 and
> why users were getting TCP_REFRESH_MISS instead of TCP_REFRESH_HIT on
> files that were recently cached.  We first noticed that users were
> getting misses when we expected them to receive hits.

TCP_REFRESH_MISS is a cache validation indicating that the object has
been updated on the origin server.

> I have set the min and max age to be 5 and 7 days respectively. When
> I look in the store.log file, I do see objects which were known to
> have been cached today (based on the time/date stamp in the file name),
> yet they have a status code of RELEASE.

And you are sure it wasn't simply replaced with a newer copy of the same
URL?

Regards
Henrik




RE: [squid-users] Authentication Issue with Squid and mixed BASIC/NTLM auth

2008-10-20 Thread Amos Jeffries
> This has all been on a test machine that has access to OWA and the
> internal network; nothing production was touched :) I work with a very
> large network, so I have a lot of resources at my disposal.
>

Doh, here I was still thinking along the lines of you doing auth in Squid.
Now I've got my head straight. For the config below, the only thing that I
think matters to auth in usage is the passthru behaviour.
OWA itself must kick off a domain controller authentication with the
client, which just funnels through Squid on its way back to OWA.

Which leaves me stumped as to why the behavior change happened.

> The config (it's been sanitized for confidentiality reasons):
>
> https_port 10.212.134.117:443 vhost protocol=http
> cert=/etc/squid/certs/mail.bar.com.san.pem

Did you elide the defaultsite= parameter from here?

> https_port 10.212.134.119:443 vhost defaultsite=mail.foo.com
> protocol=http cert=/etc/squid/certs/foo.pem
> https_port 10.212.134.121:443 vhost protocol=http
> cert=/etc/squid/certs/domain.domain.com.san.pem
> https_port 10.212.134.123:443 vhost defaultsite=mail.domain2.com
> protocol=http cert=/etc/squid/certs/mail.domain2.com.pem
> https_port 10.212.134.123:8000 vhost defaultsite=domain3.domain3.com
> protocol=http cert=/etc/squid/certs/domain3.domain.pem
> https_port 10.212.134.125:443 vhost defaultsite=owa.domain4.com
> protocol=http cert=/etc/squid/certs/owa.domain4.pem

> cache_peer 10.212.133.52 parent 80 0 proxy-only no-query no-digest
> originserver front-end-https=on login=PASS
> cache_peer 10.212.133.102 parent 80 0 proxy-only no-query no-digest
> originserver front-end-https=on login=PASS
> cache_peer 10.212.133.131 parent 80 0 proxy-only no-query no-digest
> originserver front-end-https=on login=PASS
> cache_peer 10.212.133.56 parent 80 0 proxy-only no-query no-digest
> originserver front-end-https=on login=PASS
> cache_peer 192.168.12.44 parent 80 0 proxy-only no-query no-digest
> originserver front-end-https=on login=PASS

> cache_peer_domain 10.212.133.52 mail.bar.com
> cache_peer_domain 10.212.133.102 mail.foo.com
> cache_peer_domain 10.212.133.56 domain1.domain1.com domain2.domain2.com
> domain3.domain3.com
> cache_peer_domain 10.212.133.131 mail.domain5.com
> autodiscover.domain5.com
> cache_peer_domain 192.168.12.44 owa.domain4.com

> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> access_log /var/log/squid/access.log squid
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320

> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT

> acl owa_access myport 80 443 8000
> http_access allow all owa_access

> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> http_access deny all
> http_reply_access allow all
> icp_access allow all
> coredump_dir /var/spool/squid
> extension_methods RPC_IN_DATA RPC_OUT_DATA
>
> The fact that NTLM is supposed to always require a domain is strange
> indeed. I can confirm that it is negotiating NTLM too; we sync
> ActiveSync phones off of the proxy, and they require NTLM for auth.

'needs' is relative to the end authenticator. It can be supplied by any of
the intermediaries by adding to the headers. Small sites often set it in
samba and default it. Various of the helpers in Squid do the same thing,
replacing what's given. But you are not using those, which confused me
earlier into thinking you were.

> -Chris
>
> -Original Message-
> From: Amos Jeffries [mailto:[EMAIL PROTECTED]
> Sent: Friday, October 17, 2008 10:31 PM
> To: Chris Natter
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] Authentication Issue with Squid and mixed
> BASIC/NTLM auth
>
>
> Um, something weird is going on. I'm a little scared by the double sets
> of bad news.
> Can you confirm that your in-use systems are okay. I haven't led you to
> a point where anything serious is broken? (ie this is all isolated on a
> test machine where its okay to break?)
>
>
> Chris Natter wrote:
>> Hmmm, strange. I tested 2.7STABLE4, but it doesn't seem to be
> stripping
>> the DOMAIN, it will still accept only DOMAIN\USERNAME. Perhaps I'm
>> missing something?
>

[squid-users] can I use Squid as a proxy of streaming protocol?

2008-10-20 Thread keisuke.hamanaka
Hello, I have a question.

Can I use Squid as a proxy for streaming protocols such as WMV, RealMedia
and QuickTime?

Can I use Squid as a proxy for instant messaging such as AOL, Yahoo and MSN?

I'm very sorry that my English is so poor.


Re: [squid-users] can I use Squid as a proxy of streaming protocol?

2008-10-20 Thread Tim Bates

[EMAIL PROTECTED] wrote:

> Can I use Squid as a proxy for streaming protocols such as WMV,
> RealMedia and QuickTime?
> Can I use Squid as a proxy for instant messaging such as AOL, Yahoo
> and MSN?

Squid is an HTTP proxy, so anything that can use HTTP proxies can go
through it.


MSN Messenger and ICQ can, for example. WMV files streamed via HTTP
should work, but won't if they are coming over any other protocol. Not
sure about the rest of your list.


TB
