[squid-users] Dedicate Bandwidth to IP Address

2011-02-22 Thread Edmonds Namasenda
Dear all.

I would like to have a video conference call on my LAN using a
particular IP address. This will be for a limited time and I want a
clear connection.
We are already running Squid in transparent proxy mode with some ACLs
limiting HTTP access, downloads, and streaming to a particular group
of IP addresses. However, the IP address I want to use is among the
admin addresses with open (unrestricted) access to anything.

How can I allocate 512K of my bandwidth to that particular IP address
for a test call? I can then adjust (increase or decrease) the
bandwidth to test the effects.
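
For reference, Squid's delay_pools can cap the HTTP bandwidth of one
client, though they cannot reserve bandwidth for non-HTTP traffic such
as the call itself (that needs QoS on the router). A minimal class-1
sketch, assuming a hypothetical address, with 512 Kbit/s expressed as
the 64000 bytes/s that delay_parameters uses:

  # Requires Squid built with delay pools support.
  acl conference_ip src 192.0.2.25       # hypothetical address
  delay_pools 1
  delay_class 1 1                        # class 1: single aggregate bucket
  delay_access 1 allow conference_ip     # only this address enters the pool
  delay_access 1 deny all
  delay_parameters 1 64000/64000         # restore/max, bytes per second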

--
Thank you and kind regards,

I.P.N Edmonds
ICT Practitioner & Consultant


Re: [squid-users] Squid with three isp

2011-02-22 Thread Senthilkumar

Amos Jeffries wrote:

On Wed, 23 Feb 2011 10:41:20 +0530, Senthilkumar wrote:

Hello All,

We have a gateway machine with three upstream ISPs, and to make
clients use a particular ISP we use advanced routing based on the
source address. When we run Squid on the same machine to log
websites, all traffic passes through a single ISP, i.e. the ISP which
is set as the default gateway. We need all users to pass through Squid
and still use the different ISPs as per the source route.
Please share your views on how to achieve this.


http://wiki.squid-cache.org/ConfigExamples/Strange/RotatingIPs

Amos


Thank you very much, Amos.

We have clients in the 10.X.X, 172.16.1.X and 172.16.2.X ranges, and
we need each client range to use a single ISP for upload and download.

Would the following configuration achieve it?

acl isp1 src 10.X.X
acl isp2 src 172.16.1.X
acl isp3 src 172.16.2.X

acl download method GET HEAD
acl upload method POST PUT

tcp_outgoing_address  isp1 download upload
tcp_outgoing_address  isp2 download upload
tcp_outgoing_address  isp3 download upload

By setting tcp_outgoing_address, will upload and download take place
through the source-routed ISP or through the default-gateway ISP?
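
For reference, tcp_outgoing_address takes an outgoing IP address as
its first parameter, followed by ACLs that must all match (so a single
request can never match both a GET-only and a POST-only ACL on one
line). A sketch keyed on the source ACLs alone, since the source
already determines the ISP for both uploads and downloads; the
addresses are documentation-range placeholders, not real ones:

  # Sketch only -- substitute the address each ISP assigned to the
  # gateway machine for these placeholders.
  tcp_outgoing_address 203.0.113.10 isp1
  tcp_outgoing_address 198.51.100.10 isp2
  tcp_outgoing_address 192.0.2.10 isp3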


Thanks,
Senthil



[squid-users] Squid appliance?

2011-02-22 Thread Eliezer

Thanks Amos.

That was very helpful.

Well, if you do ask me, I think I know the reasons, because I have
seen the traffic logs at my workplace (an ISP) and some rule sets that
people have published on the net.

I also wanted to ask about the squid-appliance development plan.

I'm not really a developer, but it seems a basic installation script
could be written very easily to install and/or configure a proxy with
on/off triggers or a basic selection menu.

I have also seen that Turnkey Linux has a nice "patch" for their core
appliance to install caching and filtering using squid3, easily
changed to other Squid versions:

http://www.turnkeylinux.org/forum/general/20100920/tklpatch-web-filter-proxy

A nice thing they have is the TKL config menu (Perl-based, I think),
which could be adapted to match a Squid installation/configuration
tool.

Their core has a 110-150 MB installation footprint and gets updates
and other maintenance, so it seems like a nice candidate.

The only thing I have seen is that my Debian cache server uses less
CPU and less RAM.

I was thinking of taking the time to work on a basic installation,
and then seeing whether I can make it into more than just an
installation.

Regards,
Eliezer




On 23/02/2011 07:56, Amos Jeffries wrote:


On Wed, 23 Feb 2011 07:03:02 +0200, Eliezer wrote:

I have seen refresh_pattern with an Age percentage of more than 100%,
and my question was:

does that percentage extend the expiration time?

or does Squid have a maximum limit of 100%?


No. The limit is 1 year. So if the percentage of past age comes to
over 1 year, it will be cropped back to that 1-year maximum.





I have seen people writing unreasonable and ridiculous patterns for
caching, like:

refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?)
5259487 999% 5259487 override-expire ignore-reload reload-into-ims
ignore-no-cache ignore-private

It means "save the file for 3652 days = 5259487 minutes / 60 / 24",
am I right?


Yes, they are ridiculous.

Not for the reasons you seem to think.

For an object 10 seconds old when Squid received it, that % would
keep it in cache and trigger a revalidation when it reached about 100
seconds old (10 sec * 9.99, rounded to 1 second).


The max-value caps this %, but 100 seconds is less than N days, so
nothing happens there.


The min-value then kicks in and raises that to "minimum 3652 days",
which is a ridiculously long period to go *without validation*.


To get properly fresh content, a large % and/or max-value is
reasonable, but such a high min-value is usually not a good thing.
Definitely not a good thing to do without deep analysis of the
websites the pattern catches.





It's not harming anyone, but it seems kind of weird.


It is harming their clients' view of the websites which match that
refresh_pattern regex, particularly when those ignore-* and
override-* options are used as well.


In the case given it is YouTube (with YouTube clone sites as
collateral damage), and a lot of very deep analysis has been done to
ensure that the patterns for those videos do no damage to the user
experience. Quite the opposite. Our adoption and publication of those
rules was a last resort after a year of discussion attempting to get
YouTube to present cache-friendly controls on their site fell through.


Amos


Re: [squid-users] Squid with three isp

2011-02-22 Thread Amos Jeffries

On Wed, 23 Feb 2011 10:41:20 +0530, Senthilkumar wrote:

Hello All,

We have a gateway machine with three upstream ISPs, and to make
clients use a particular ISP we use advanced routing based on the
source address. When we run Squid on the same machine to log
websites, all traffic passes through a single ISP, i.e. the ISP which
is set as the default gateway. We need all users to pass through
Squid and still use the different ISPs as per the source route.
Please share your views on how to achieve this.


http://wiki.squid-cache.org/ConfigExamples/Strange/RotatingIPs

Amos


Re: [squid-users] i was wondering about the refresh_pattern Age percentage more than 100%

2011-02-22 Thread Amos Jeffries

On Wed, 23 Feb 2011 07:03:02 +0200, Eliezer wrote:

I have seen refresh_pattern with an Age percentage of more than 100%,
and my question was:

does that percentage extend the expiration time?

or does Squid have a maximum limit of 100%?


No. The limit is 1 year. So if the percentage of past age comes to
over 1 year, it will be cropped back to that 1-year maximum.





I have seen people writing unreasonable and ridiculous patterns for
caching, like:

refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?)
5259487 999% 5259487 override-expire ignore-reload reload-into-ims
ignore-no-cache ignore-private

It means "save the file for 3652 days = 5259487 minutes / 60 / 24",
am I right?


Yes, they are ridiculous.

Not for the reasons you seem to think.

For an object 10 seconds old when Squid received it, that % would
keep it in cache and trigger a revalidation when it reached about 100
seconds old (10 sec * 9.99, rounded to 1 second).


The max-value caps this %, but 100 seconds is less than N days, so
nothing happens there.


The min-value then kicks in and raises that to "minimum 3652 days",
which is a ridiculously long period to go *without validation*.


To get properly fresh content, a large % and/or max-value is
reasonable, but such a high min-value is usually not a good thing.
Definitely not a good thing to do without deep analysis of the
websites the pattern catches.
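
For contrast, a sketch of a pattern that keeps revalidation in play;
the values here are illustrative, not a recommendation:

  # No forced minimum freshness, fresh for 200% of the object's
  # observed age, capped at one week (10080 minutes).
  refresh_pattern -i \.(jpg|png|gif)$ 0 200% 10080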





It's not harming anyone, but it seems kind of weird.


It is harming their clients' view of the websites which match that
refresh_pattern regex, particularly when those ignore-* and
override-* options are used as well.


In the case given it is YouTube (with YouTube clone sites as
collateral damage), and a lot of very deep analysis has been done to
ensure that the patterns for those videos do no damage to the user
experience. Quite the opposite. Our adoption and publication of those
rules was a last resort after a year of discussion attempting to get
YouTube to present cache-friendly controls on their site fell through.


Amos


[squid-users] Squid with three isp

2011-02-22 Thread Senthilkumar

Hello All,

We have a gateway machine with three upstream ISPs, and to make
clients use a particular ISP we use advanced routing based on the
source address. When we run Squid on the same machine to log
websites, all traffic passes through a single ISP, i.e. the ISP which
is set as the default gateway. We need all users to pass through
Squid and use the different ISPs as per the source route.

Please share your views on how to achieve this.

Thanks
Senthil.



[squid-users] i was wondering about the refresh_pattern Age percentage more than 100%

2011-02-22 Thread Eliezer
I have seen refresh_pattern with an Age percentage of more than 100%,
and my question was:

does that percentage extend the expiration time?

or does Squid have a maximum limit of 100%?


I have seen people writing unreasonable and ridiculous patterns for
caching, like:

refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?) 5259487 999%
5259487 override-expire ignore-reload reload-into-ims ignore-no-cache
ignore-private

It means "save the file for 3652 days = 5259487 minutes / 60 / 24", am I right?

It's not harming anyone, but it seems kind of weird.

I understand that there are situations where this is needed.

Thanks Eliezer






Re: [squid-users] me.com TCP_MISS/503

2011-02-22 Thread Amos Jeffries

On Tue, 22 Feb 2011 07:37:27 -0800 (PST), nickcx wrote:

Hi List,

I'm trying to get access to me.com working on my test proxy, but I 
keep
getting a timeout in my browsers: (110) Connection timed out. Access 
log
shows TCP_MISS/503. I have tried disabling various things to see if I 
can
get it working: authentication, send direct – even allow all at the 
top but

no joy.

On 3.0.STABLE20 I've had this working OK.

Any help/pointers gratefully received,



"Connection timed out" usually means the network connectivity is broken 
or lagging a lot.


There are two differences between 3.0 and 3.1 in the TCP connection 
area.
 One is that 3.1 will attempt to use IPv6 when the website presents an
 AAAA address.
 ** That particular site appears to only be presenting A records from
here, so this is unlikely. But you may be getting AAAA results, so
check the IPs yourself.


 The other is that 3.1 sends slightly larger packets, so things like
Path-MTU discovery are more important to have working correctly.


Of course, these are only relevant if the problem appears on one
version and then immediately disappears on the other. If by "had this
working" you mean last week or months ago, then there could have been
basic Internet changes you are not aware of between you and the
website.


Amos


Thanks
===

Squid 3.1.8 conf:





cache_store_log none squid
cache_log /var/log/squid/cache.log squid


NP: these last two log directives only take one parameter; the
"squid" there is not needed.
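
That is, a sketch of the corrected lines:

  cache_store_log none
  cache_log /var/log/squid/cache.log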





# Blocks CONNECT method to IP addresses (Blocks Skype amongst other 
things)

acl StopDirectIP url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+


IPv6 is spreading. This pattern needs to be updated.

There is a new recommended pattern on the
http://wiki.squid-cache.org/ConfigExamples/Chat/Skype page.


You can omit the "443" port at the end of that wiki example to retain
the port-matching looseness of your current rule.
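
As a stopgap, a sketch extending the current rule to also catch
bracketed IPv6 literals; this is a loose pattern of my own, not the
wiki's exact one:

  # url_regex lines sharing an ACL name OR together, so this adds
  # bracketed IPv6 literals alongside the existing dotted-IPv4 match.
  acl StopDirectIP url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
  acl StopDirectIP url_regex ^\[[0-9a-fA-F:]+\]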




# MSN Messenger Allow IP ACL
acl IP_MSNMessenger src 
"/etc/squid/ACL/IPADDRESSES/IP_MSNMESSENGER.txt"




Allowing a whole machine access by IP if it uses MSN seems a bit
excessive.
You may be interested in
http://wiki.squid-cache.org/ConfigExamples/Chat/MsnMessenger
or any of the other configs at
http://wiki.squid-cache.org/ConfigExamples/Chat





## SEND DIRECT ALLOW
always_direct allow SENDDIRECT_DstDomains
always_direct allow SENDDIRECT_IPAddresses




## CATCH ALL DENY ##
never_direct allow all
snmp_access deny all


NP: "never_direct deny all" as the only never_direct entry will combine 
with always_direct for those bypasses and cause those requests to block 
with a "Cannot forward" error.
 Since they both MUST NOT go to a peer and MUST NOT go direct to an IP 
there is no path left to choose from.


The way to implement what you appear to want is with:

  always_direct allow SENDDIRECT_DstDomains
  always_direct allow SENDDIRECT_IPAddresses

  never_direct deny SENDDIRECT_IPAddresses
  never_direct deny SENDDIRECT_DstDomains
  never_direct allow all


Amos


Re: [squid-users] Not able to apply maximum_object_size_in_memory

2011-02-22 Thread Amos Jeffries

On Tue, 22 Feb 2011 10:15:11 -0500, John Craws wrote:

Hi Amos,

On Mon, Feb 21, 2011 at 5:37 PM, Amos Jeffries wrote:






Thanks for the detail. You are right about it being in the memory 
cache.


What I expect to see with your config is that the file is pushed to
disk, since it is within the 17 MB but over the 32 KB. But you have
no on-disk cache, right?


You are correct. I have (intentionally) no disk cache (no cache_dir
directive). I expected the object to be discarded.



Something funky is going on with the swapout.

I think there are 2 bugs visible here. The easy one is that the
config parser is not detecting and warning about the global limit
being larger than the biggest specific limit. The second is that the
object is not discarded when it is over the memory size and a push to
disk is not possible.


That's what I expected also. Let me know if I can do anything to 
help.

Is it reasonable to open a bug?

Thanks,

John



It is reasonable to open a bug :)

Meanwhile, setting both the memory and global limits will be a
workable workaround, given that you have no disk cache.
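
A sketch of that workaround, using the limits from the config posted
earlier in this thread:

  # With no cache_dir, align the global cap with the in-memory cap so
  # objects too big for memory are never accepted for caching.
  maximum_object_size_in_memory 32 KB
  maximum_object_size 32 KB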


Amos


Re: [squid-users] cache dynamically generated images

2011-02-22 Thread Amos Jeffries

On Tue, 22 Feb 2011 11:26:51 -0500, Charles Galpin wrote:

Hi Amos, thanks so much for the help. More questions and
clarification needed please

On Feb 18, 2011, at 5:47 PM, Amos Jeffries wrote:


Make sure your config has had these changes:
 http://wiki.squid-cache.org/ConfigExamples/DynamicContent

which allows Squid to play with query-string (?) objects properly.


Yes these were default settings for me.  I don't think this is
necessarily an issue for me though since I am sending URLs that look
like static image requests, but converting them via mod_rewrite in
apache to call my script.

TCP_REFRESH_MISS means the backend sent a new changed copy while 
revalidating/refreshing its existing copy.


max-age=0 means revalidate that it has not changed before sending
anything.


>  I have set an Expires, Etag, "Cache-Control: max-age=600,
s-max-age=600, must-revalidate", "Content-Length and


must-revalidate from the server is essentially the same as max-age=0
from the client. It will also lead to TCP_REFRESH_MISS.


I'll admit I threw in the must-revalidate as part of my increasingly
desperate attempts to get things behaving the way I wanted, and
didn't fully understand its ramifications, nor the client-side
max-age=0 implications, but your explanation helps!

BUT, these controls are only what is making the problem visible. The 
server logic itself is the actual problem.


Agreed!

ETag should be the MD5 checksum of the file or something similarly 
unique. It is used alongside the URL to guarantee version differences 
are kept separate.


Yes, this was another desperate attempt to force caching to occur,
and I will implement something more sane for the actual app. But this
should have helped, shouldn't it? For my testing this should have
uniquely identified this image, right?

I guess I have a fundamental misunderstanding, but my assumption was
that all these directives were ways to tell Squid not to keep asking
the origin, but to serve from the cache until the age expired and at
that point check if it changed. I totally didn't expect it to check
every time, and this still doesn't sit well with me. Should it really
check every time? I know a check is faster than a full GET, but it
still seems more than necessary if caching parameters have been
specified.

Your approach is reasonable for your needs. But the backend server
system is letting you down by sending back a new copy on every
validation.
If you can get it to present 304 Not Modified responses between file
update times, this will work as intended.


This would mean implementing some extra logic in the script to
handle If-Modified-Since, If-Unmodified-Since, If-None-Match and
If-Match headers.
 The script itself needs to be in control of whether a local static
duplicate is used; Apache does not have enough info to do it, as you
noticed. Most CMSes call this server-side caching.
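
For illustration, the revalidation exchange the script would handle
looks like this; the path is from the example later in this thread
and the validator values are hypothetical:

  GET /my/image/path.jpg HTTP/1.1
  Host: imageserver.my.org
  If-Modified-Since: Fri, 18 Feb 2011 12:00:00 GMT
  If-None-Match: "d41d8cd98f00b204"

  HTTP/1.1 304 Not Modified
  ETag: "d41d8cd98f00b204"

When the file really has changed, the script answers 200 with the new
body instead of the 304.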


Ok, I can return 304 and it gets a cache hit as expected, so this is
great. I am not sure I'll waste any time making my test script any
smarter, as it's just a simple perl script and the actual
implementation will be in Java and able to make these determinations.
But one of the things that has been throwing me off is that I see no
signs in the Apache logs of a HEAD request; they all show up as GETs.
I assume this is my mod_rewrite rule, but I also tried with a direct
URL to the script and am not getting the If-Modified-Since header,
for example (the only one I know off the top of my head is set by the
CGI module).


Correct. This is a RESTful property of HTTP.
HEAD is for systems to determine the properties of an object when
they *never* want the body to come back in the reply. Revalidation
requests do want changed bodies to come back when relevant, so they
use GET with If-* headers.




But either way, this confirms it's just my dumb script to blame :)



Cool, good to know it's easily fixed.



Lastly, I was unable to set up squid on an alternate port - say 8081,
and use an existing apache on port 80, both on the same box. This is
for testing so I can run squid in parallel with the existing service
without changing the port it is on. Squid seems to want to use the
same port for the origin server as itself and I can't figure out how
to say "listen on 8081 but send requests to port 80 of the origin
server". Any thoughts on this? I am using another server right now to
get around this, but it would be more convenient to use the same box.


cache_peer parameter #3 is the port number on the origin server to 
send HTTP requests to.


Also, to make the Host: header and URL contain the right port number
when crossing ports like this, you need to set the http_port vport=X
option to the port the backend server is using. Otherwise Squid will
place its public-facing port number in the Host: header to inform the
backend what the client's real URL was.
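
Putting those two points together, a minimal same-box sketch; the
loopback peer and the name= label are assumptions, and the ports
follow the example above:

  # Squid listens on 8081 and relays to Apache on port 80 of the same
  # box; vport=80 puts the backend's port into the Host: header/URL.
  http_port 8081 accel vport=80
  cache_peer 127.0.0.1 parent 80 0 no-query originserver name=apache80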


Yes I have this but it's still not working. Below are all uncommented
lines in my squid.conf - can you see anything I have that's messing
this up?

Re: [squid-users] Frustrating "Invalid Request" Reply

2011-02-22 Thread Amos Jeffries

On Tue, 22 Feb 2011 17:24:39 +0200, Ümit Kablan wrote:

Hi,

2011/2/21 Amos Jeffries wrote:

On Mon, 21 Feb 2011 16:19:53 +0200, Ümit Kablan wrote:



and this works fine. The localnet counterpart:

---
GET



/search?hl=tr&source=hp&biw=1276&bih=823&q=eee+ktu&aq=0&aqi=g10&aql=&oq=eee&fp=64d53dfd7a69225a&tch=3&ech=1ψ=6UBOTbHmCtah_Aa2haXRDw12969740590425&wrapid=tlif129697480915821&safe=active
HTTP/1.1


Note the missing http://domain details in the URL. This is not a
browser->proxy HTTP request. It is a browser->origin request.

IIRC interception of this type of request does not work in Windows, 
since
the kernel NAT details are not available without proprietary 
third-party

network drivers. Look at WPAD configuration of the localnet browsers
instead, that way they will send browser->proxy requests nicely.


Exactly! The working requests all start with http://domain/ as you
mentioned. (I must say I couldn't capture loopback network packets
in Windows.) I can't guess why Firefox, IE, and Chrome send
protocol://domain-less requests when you hit Enter, but send the
correct URL when Google scans for autocompletion. I looked for
advanced options but couldn't find anything either. Have you got an
idea for a workaround? Is it possible to tell the Squid conf to
assume an exception for a Host (e.g. www.google.com) and, if it
encounters a protocol://domain-less URL, just concatenate the two?


Squid needs to be configured via the http_port to know what mode/type 
of traffic it is going to receive. The browsers need to be sending the 
right type as well.
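
In config terms, a sketch with illustrative port numbers, one
declared port per traffic mode:

  # Browser-configured (browser->proxy) forward requests:
  http_port 3128
  # NAT-intercepted browser->origin requests; needs working kernel
  # NAT lookups, which per the earlier reply Windows does not provide:
  http_port 3129 intercept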


There are a number of workarounds. So we are at the question of what
exactly you are trying to do with the traffic: what does your goal
look like?


Amos



Re: [squid-users] cache content age

2011-02-22 Thread Amos Jeffries

On Tue, 22 Feb 2011 22:00:42 +0800, Terry. wrote:

2011/2/22 jiluspo:
OK, then the content is revalidated ... so the content age should
change? If so, will the content's headers change too?



Age is re-counted IMO.


Yes, the Age: header is ignored unless it is needed to cover missing
timestamps.
It *should* get updated and/or added on every reply going through
Squid, although it is optional on certain non-cached replies.


Amos



Re: [squid-users] cache dynamically generated images

2011-02-22 Thread Charles Galpin
Hi Amos, thanks so much for the help. More questions and clarification needed 
please

On Feb 18, 2011, at 5:47 PM, Amos Jeffries wrote:
> 
> Make sure your config has had these changes:
> http://wiki.squid-cache.org/ConfigExamples/DynamicContent
> 
> which allows Squid to play with query-string (?) objects properly.

Yes these were default settings for me.  I don't think this is
necessarily an issue for me though since I am sending URLs that look
like static image requests, but converting them via mod_rewrite in
apache to call my script.

> TCP_REFRESH_MISS means the backend sent a new changed copy while
> revalidating/refreshing its existing copy.
> 
> max-age=0 means revalidate that it has not changed before sending anything.
> 
>> I have set an Expires, Etag, "Cache-Control:
>> max-age=600, s-max-age=600, must-revalidate", "Content-Length

> 
> must-revalidate from the server is essentially the same as max-age=0
> from the client. It will also lead to TCP_REFRESH_MISS.

I'll admit I threw in the must-revalidate as part of my increasingly
desperate attempts to get things behaving the way I wanted, and didn't
fully understand its ramifications, nor the client-side max-age=0
implications, but your explanation helps!

> BUT, these controls are only what is making the problem visible. The
> server logic itself is the actual problem.

Agreed!

> ETag should be the MD5 checksum of the file or something similarly
> unique. It is used alongside the URL to guarantee version differences
> are kept separate.

Yes, this was another desperate attempt to force caching to occur, and
I will implement something more sane for the actual app. But this
should have helped, shouldn't it? For my testing this should have
uniquely identified this image, right?

I guess I have a fundamental misunderstanding, but my assumption was
that all these directives were ways to tell Squid not to keep asking
the origin, but to serve from the cache until the age expired and at
that point check if it changed. I totally didn't expect it to check
every time, and this still doesn't sit well with me. Should it really
check every time? I know a check is faster than a full GET, but it
still seems more than necessary if caching parameters have been
specified.

> Your approach is reasonable for your needs. But the backend server
> system is letting you down by sending back a new copy on every validation.
> If you can get it to present 304 Not Modified responses between file
> update times, this will work as intended.
> 
> This would mean implementing some extra logic in the script to handle
> If-Modified-Since, If-Unmodified-Since, If-None-Match and If-Match
> headers.
> The script itself needs to be in control of whether a local static
> duplicate is used; Apache does not have enough info to do it, as you
> noticed. Most CMSes call this server-side caching.

Ok, I can return 304 and it gets a cache hit as expected, so this is
great. I am not sure I'll waste any time making my test script any
smarter, as it's just a simple perl script and the actual
implementation will be in Java and able to make these determinations.
But one of the things that has been throwing me off is that I see no
signs in the Apache logs of a HEAD request; they all show up as GETs.
I assume this is my mod_rewrite rule, but I also tried with a direct
URL to the script and am not getting the If-Modified-Since header, for
example (the only one I know off the top of my head is set by the CGI
module).

But either way, this confirms it's just my dumb script to blame :)

>> 
>> Lastly, I was unable to set up squid on an alternate port - say 8081, and
>> use an existing apache on port 80, both on the same box. This is for
>> testing so I can run squid in parallel with the existing service without
>> changing the port it is on.  Squid seems to want to use the same port
>> for the origin server as itself and I can't figure out how to say
>> "listen on 8081 but send requests to port 80 of the origin server". Any
>> thoughts on this? I am using another server right now to get around
>> this, but it would be more convenient to use the same box.
> 
> cache_peer parameter #3 is the port number on the origin server to
> send HTTP requests to.
> 
> Also, to make the Host: header and URL contain the right port number
> when crossing ports like this, you need to set the http_port vport=X
> option to the port the backend server is using. Otherwise Squid will
> place its public-facing port number in the Host: header to inform the
> backend what the client's real URL was.

Yes I have this but it's still not working. Below are all uncommented
lines in my squid.conf - can you see anything I have that's messing this
up? The imageserver.my.org is an apache virtual host if it matters. With
this, if I go to http://imageserver.my.org:8081/my/image/path.jpg ,
squid calls http://imageserver.my.org:8081/my/image/path.jpg instead of
http://imageserver.my.org:80/my/image/path.jpg

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl 

[squid-users] me.com TCP_MISS/503

2011-02-22 Thread nickcx

Hi List,

I'm trying to get access to me.com working on my test proxy, but I keep
getting a timeout in my browsers: (110) Connection timed out. Access log
shows TCP_MISS/503. I have tried disabling various things to see if I can
get it working: authentication, send direct – even allow all at the top but
no joy. 

On 3.0.STABLE20 I've had this working OK.

Any help/pointers gratefully received, 

Thanks
===

Squid 3.1.8 conf:

http_port 8080
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -r
auth_param negotiate children 120 startup=70 idle=10
auth_param negotiate keep_alive on

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60 startup=20 idle=5
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 20 startup=10 idle=2
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

authenticate_ttl 1 hour
cache_mem 1024 MB
fqdncache_size 2048
ipcache_size 2048
ipcache_low 90
ipcache_high 95
maximum_object_size_in_memory 100 KB
max_filedesc 8072

cache_peer [omitted] parent 8080 0 no-query proxy-only no-digest default

cache_mgr [omitted]
cachemgr_passwd [omitted] all
client_persistent_connections on
#server_persistent_connections on
persistent_connection_after_error on

## LOG LOCATIONS
access_log /var/log/squid/access.log squid
cache_store_log none squid
cache_log /var/log/squid/cache.log squid

## USER-AGENT (Browser-type) ACLs
acl Java_jvm browser "/etc/squid/ACL/USERAGENTS/USER-AGENTS_JAVA.txt"
acl iTunes browser "/etc/squid/ACL/USERAGENTS/USER-AGENTS_APPLE.txt"
acl MSNMessenger browser "/etc/squid/ACL/USERAGENTS/USER-AGENTS_MSN.txt"

## USER AUTHENTICATION ACLs
acl AuthenticatedUsers proxy_auth REQUIRED

## URL DESTINATION ACLs
acl URL_ALLOWDstDomains dstdom_regex
"/etc/squid/ACL/URL/URL_ALLOWDstDomains.txt"

## URL Regex
acl URL_AllowRegex url_regex -i "/etc/squid/ACL/URL/URL_ALLOWRegex.txt"

## IP ACLS ##
acl CLIENTIP src "/etc/squid/ACL/IPADDRESSES/IP_CLIENTIP.txt"

## Windows Update ACLS
acl WSUS_IP src 172.16.10.127

# LAN IP ACLs
acl 172SUBNETS src 172.16.0.0/16
acl SERVERSUBNETS src 172.16.10.0/24
acl SERVERSUBNETS src 172.16.100.0/24

# Blocks CONNECT method to IP addresses (Blocks Skype amongst other things)
acl StopDirectIP url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+

# MSN Messenger Allow IP ACL
acl IP_MSNMessenger src "/etc/squid/ACL/IPADDRESSES/IP_MSNMESSENGER.txt"

# SEND DIRECT ACLs
acl SENDDIRECT_DstDomains dstdom_regex
"/etc/squid/ACL/SENDDIRECT/SENDDIRECT_DSTDOMAINS.txt"
acl SENDDIRECT_IPAddresses src
"/etc/squid/ACL/SENDDIRECT/SENDDIRECT_IPADDRESSES.txt"

# CONNECT Method Direct IP ACLs
acl IP_CONNECTALLOW src "/etc/squid/ACL/IPADDRESSES/IP_CONNECTALLOW.txt"

## LOCALHOST ACLs
acl localhost src 127.0.0.1
acl to_localhost dst 127.0.0.0/8

## CACHEMGR ACL
acl manager proto cache_object

## PORTS ACLs
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 8080    # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl RTMP_ports port 1935    # RTMP

# CONNECTION METHOD ACL
acl CONNECT method CONNECT
acl POST method POST

# ICAP SERVER #

## ICAP-specific ACLs - required to be placed before ICAP settings
acl ICAP_BYPASS dstdom_regex "/etc/squid/ACL/ICAP/ICAP_BYPASS_URL.txt"

## ICAP Settings
icap_enable on
icap_preview_enable on
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Authenticated-User
icap_client_username_encode on
icap_service ss reqmod_precache 0 icap://localhost:1344/ssreqmod
icap_service_revival_delay 60
adaptation_service_set c1 ss
adaptation_access c1 deny ICAP_BYPASS
# We don't check for auth for these either, so no point sending them to ICAP
adaptation_access c1 deny POST
#adaptation_access c1 deny CONNECT
adaptation_access c1 deny URL_ALLOWDstDomains
adaptation_access c1 deny URL_AllowRegex
adaptation_access c1 deny CLIENTIP
adaptation_access c1 deny WSUS_IP
adaptation_access c1 deny iTunes
adaptation_access c1 deny Java_jvm
# Check everything else
adaptation_access c1 allow all

## CACHEMGR ALLOW
http_access allow manager 172SUBNETS

## GLOBAL DENY RULES
http_access deny !Safe_ports
http_access deny to_localhost
http_access deny !SSL_ports !172SUBNETS CONNECT
http_access deny !SSL_ports !RTMP_ports !172SUBNETS POST
http_access deny 172SUBNETS !IP_MSNMessenger MSNMessenger
http_access deny !IP_CONNECTALLOW StopDirectIP
http_access deny !172SUBNETS iTunes
http_access deny !172SUBNETS Java_jvm

# USER 

Re: [squid-users] Frustrating "Invalid Request" Reply

2011-02-22 Thread Ümit Kablan
Hi,

2011/2/21 Amos Jeffries :
> On Mon, 21 Feb 2011 16:19:53 +0200, Ümit Kablan wrote:
>
>>
>> and this works fine. The localnet counterpart:
>>
>> ---
>> GET
>>
>>
>> /search?hl=tr&source=hp&biw=1276&bih=823&q=eee+ktu&aq=0&aqi=g10&aql=&oq=eee&fp=64d53dfd7a69225a&tch=3&ech=1ψ=6UBOTbHmCtah_Aa2haXRDw12969740590425&wrapid=tlif129697480915821&safe=active
>> HTTP/1.1
>
> Note the missing http://domain details in the URL. This is not a
> browser->proxy HTTP request. It is a browser->origin request.
>
> IIRC interception of this type of request does not work in Windows, since
> the kernel NAT details are not available without proprietary third-party
> network drivers. Look at WPAD configuration of the localnet browsers
> instead, that way they will send browser->proxy requests nicely.

Exactly! The working requests all start with http://domain/ as you
mentioned. (I must say I couldn't capture loopback network packets
in Windows.) I can't guess why Firefox, IE, and Chrome send
protocol://domain-less requests when you hit Enter, but send the
correct URL when Google scans for autocompletion. I looked for
advanced options but couldn't find anything either. Have you got an
idea for a workaround? Is it possible to tell the Squid conf to
assume an exception for a Host (e.g. www.google.com) and, if it
encounters a protocol://domain-less URL, just concatenate the two?

>
> Amos
>

Thanks for your attention,

-- 
Ümit


Re: [squid-users] Not able to apply maximum_object_size_in_memory

2011-02-22 Thread John Craws
Hi Amos,

On Mon, Feb 21, 2011 at 5:37 PM, Amos Jeffries  wrote:
> On Mon, 21 Feb 2011 11:52:11 -0500, John Craws wrote:
>>
>> Hi,
>>
>> Thank you for the clarification. Maybe I'm just not correctly
>> interpreting whether the object is in the cache or not.
>> Here's the info you asked for, based on the config I posted
>> previously. I'm downloading a +- 16M file.
>>
>> 1. Before downloading the object:
>>
>> john.craws@jjj:~/wget$ curl -I http://172.16.199.150/popeye.mp4
>> HTTP/1.0 200 OK
>> Date: Mon, 21 Feb 2011 16:46:56 GMT
>> Server: Apache/2.2.3 (Red Hat)
>> Last-Modified: Thu, 24 Sep 2009 19:22:32 GMT
>> ETag: "e2800c-1013726-47457c0c5ae00"
>> Accept-Ranges: bytes
>> Content-Length: 16856870
>> Content-Type: video/mp4
>> X-Cache: MISS from jnk
>> Via: 1.0 jnk (squid/3.1.11)
>> Connection: keep-alive
>>
>>
>> john.craws@jjj:~/wget$ /opt/squid/bin/squidclient mgr:objects
>> HTTP/1.0 200 OK
>> Server: squid/3.1.11
>> Mime-Version: 1.0
>> Date: Mon, 21 Feb 2011 15:51:02 GMT
>> Content-Type: text/plain
>> Expires: Mon, 21 Feb 2011 15:51:02 GMT
>> Last-Modified: Mon, 21 Feb 2011 15:51:02 GMT
>> X-Cache: MISS from jnk
>> Via: 1.0 jnk (squid/3.1.11)
>> Connection: close
>>
>> (lists cached objects, no trace of the object -- normal).
>>
>> 2. Downloading the object
>>
>> john.craws@jjj:~/wget$ curl http://172.16.199.150/popeye.mp4 -o popeye.mp4
>>  % Total    % Received % Xferd  Average Speed   Time    Time
>> Time  Current
>>                                 Dload  Upload   Total   Spent    Left
>>  Speed
>> 100 16.0M  100 16.0M    0     0  2446k      0  0:00:06  0:00:06
>> --:--:-- 1634k
>>
>> 3. This time the object appears in the list
>>
>> john.craws@jjj:~/wget$ /opt/squid/bin/squidclient mgr:objects
>> HTTP/1.0 200 OK
>> Server: squid/3.1.11
>> Mime-Version: 1.0
>> Date: Mon, 21 Feb 2011 15:51:18 GMT
>> Content-Type: text/plain
>> Expires: Mon, 21 Feb 2011 15:51:18 GMT
>> Last-Modified: Mon, 21 Feb 2011 15:51:18 GMT
>> X-Cache: MISS from jnk
>> Via: 1.0 jnk (squid/3.1.11)
>> Connection: close
>>
>> (...)
>> KEY 669AB801B7640FA80E4BA73193FDAC2A
>>        STORE_OK      IN_MEMORY     SWAPOUT_NONE PING_DONE
>>        CACHABLE,DISPATCHED,VALIDATED
>>        LV:1298303469 LU:1298303469 LM:1253820152 EX:-1
>>        0 locks, 0 clients, 1 refs
>>        Swap Dir -1, File 0X
>>        GET http://172.16.199.150/popeye.mp4
>>        inmem_lo: 0
>>        inmem_hi: 16857134
>>        swapout: 0 bytes queued
>> (...)
>>
>>
>> 4. This time it's a HIT
>>
>> john.craws@jjj:~/wget$ curl -I http://172.16.199.150/popeye.mp4
>> HTTP/1.0 200 OK
>> Date: Mon, 21 Feb 2011 15:51:09 GMT
>> Server: Apache/2.2.3 (Red Hat)
>> Last-Modified: Thu, 24 Sep 2009 19:22:32 GMT
>> ETag: "e2800c-1013726-47457c0c5ae00"
>> Accept-Ranges: bytes
>> Content-Length: 16856870
>> Content-Type: video/mp4
>> Age: 16
>> X-Cache: HIT from jnk
>> Via: 1.0 jnk (squid/3.1.11)
>> Connection: keep-alive
>>
>> 5. access.log
>>
>> 1298304218.303     19 127.0.0.1 TCP_MISS/200 322 HEAD
>> http://172.16.199.150/popeye.mp4 - DIRECT/172.16.199.150 video/mp4
>> 1298304222.694      0 127.0.0.1 TCP_MEM_HIT/200 329 HEAD
>> http://172.16.199.150/popeye.mp4 - NONE/- video/mp4
>>
>> I also notice the major time difference between the two curl
>> operations.
>>
>
> Thanks for the detail. You are right about it being in the memory cache.
>
> What I expect to see with your config is that the file is pushed to disk,
> since it is within the 17 MB but over the 32 KB. But you have no on-disk
> cache, right?

You are correct. I have (intentionally) no disk cache (no cache_dir
directive). I expected the object to be discarded.

>
> Something funky is going on with the swapout.
>
> I think there are 2 bugs visible here. The easy one is that the config
> parser is not detecting and warning about the global limit being larger
> than the biggest specific limit. The second is that the object is not
> discarded when over the memory size and a push to disk is not possible.

That's what I expected also. Let me know if I can do anything to help.
Is it reasonable to open a bug?

Thanks,

John

>
> Amos
>
>> Thanks!
>>
>> John
>>
>>
>> On Fri, Feb 18, 2011 at 5:56 PM, Amos Jeffries 
>> wrote:
>>>
>>> On 19/02/11 07:28, John Craws wrote:

 Hi,

 I have a squid 3.1.11 instance configured with no disk cache.
 Stripped down configuration below.




 #
 # squid.conf



 #

 shutdown_lifetime 0 seconds
 http_port 3128
 http_access allow all
 forwarded_for transparent

 acl VIDEO-CONTENT           rep_header Content-Type video/.+

 maximum_object_size_in_memory 32 KB
 maximum_object_size 17 MB
 cache_mem 4 GB
 cache allow all
 debug_options ALL,1




 #---

Re: [squid-users] cache content age

2011-02-22 Thread Terry.
2011/2/22 jiluspo :
> OK, then the content is revalidated ... so the content age should change?
> If so, will the content's headers change too?
>

Age is re-counted IMO.



-- 
Free SmartDNS Hosting:
http://DNSbed.com/


Re: [squid-users] cache content age

2011-02-22 Thread jiluspo
OK, then the content is revalidated ... so the content age should
change? If so, will the content's headers change too?


- Original Message - 
From: "Terry." 

To: "jiluspo" 
Cc: "squid Users" 
Sent: Tuesday, February 22, 2011 5:22 PM
Subject: Re: [squid-users] cache content age



2011/2/22 jiluspo :

What happens to content when it goes above the max age in a
refresh_pattern... will it be removed from the cache? If so, instead
of removing it, can we revalidate it via IMS?



Squid will revalidate it from the upstream servers.



--
Free SmartDNS Hosting:
http://DNSbed.com/ 






Re: [squid-users] cache content age

2011-02-22 Thread Terry.
2011/2/22 jiluspo :
> What happens to content when it goes above the max age in a
> refresh_pattern... will it be removed from the cache? If so, instead
> of removing it, can we revalidate it via IMS?
>

Squid will revalidate it from the upstream servers.



-- 
Free SmartDNS Hosting:
http://DNSbed.com/


[squid-users] cache content age

2011-02-22 Thread jiluspo

What happens to content when it goes above the max age in a
refresh_pattern... will it be removed from the cache? If so, instead
of removing it, can we revalidate it via IMS?


