Re: [squid-users] does a match on an ACL stop or continue?

2012-04-05 Thread Greg Whynott

On 05/04/2012 2:09 AM, Jasper Van Der Westhuizen wrote:

Hi Greg

As far as I know it stops when it hits a rule. Rules are "AND'd" or "OR'd"
together.
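
A minimal sketch of that AND/OR behaviour, with hypothetical ACL names:

# ACLs named on one http_access line are AND'd: this denies only
# when the client is in officenet AND the URL matches badsite
acl officenet src 192.0.2.0/24
acl badsite url_regex -i badsite
http_access deny officenet badsite

# separate http_access lines are OR'd: squid walks them top to bottom
# and stops at the first line whose ACLs all match
http_access deny badsite
http_access allow officenet
http_access deny all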



thanks Jasper!
have a great weekend,
greg



[squid-users] does a match on an ACL stop or continue?

2012-04-04 Thread Greg Whynott
If I have a list of 10 ACLs and a client matches on ACL #4, will ACLs 
#5-10 still be considered, or does squid stop evaluating and perform 
the action defined for ACL #4?


example:

If someone in the network 10.10.10.0/24 attempts to load 
"badsite.com", will they be denied by the ACLs below, or will the TOS 
be modified and the site loaded?


acl execnetwork src 10.10.10.0/24
tcp_outgoing_tos 0x38 execnetwork

acl badsite url_regex -i badsite
http_access deny badsite
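
A hedged aside on the mechanics, for anyone searching the archive later: 
http_access and tcp_outgoing_tos are separate rule lists, each walked top 
to bottom independently, so matching in one list does not stop evaluation 
of the other. The lines below restate the example with an explicit default 
(the final allow/deny lines are illustrative additions):

acl execnetwork src 10.10.10.0/24
acl badsite url_regex -i badsite

# tcp_outgoing_tos list: the first matching line sets the outgoing TOS
tcp_outgoing_tos 0x38 execnetwork

# http_access list: the first matching line allows or denies; a denied
# request never opens an outgoing connection, so its TOS never applies
http_access deny badsite
http_access allow execnetwork
http_access deny all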


I ask because the behaviour doesn't appear to be consistent in my current setup.

thanks for your time,
greg






[squid-users] DSCP tags on regex acl

2012-03-14 Thread Greg Whynott

Hello,

I just wanted to confirm whether I am doing this properly, as it does not 
appear to be working. Thanks very much for your time.


The intent is to tag all traffic heading to identified sites with a TOS 
value which our internet routers will see and use to apply a policy route. 
We want to send all bulk video traffic to a particular ISP (we have 
multiple ISPs).


In the config I put:

acl youtube url_regex -i youtube
tcp_outgoing_tos af22 youtube


My hope was that any URL with "youtube" in the request would cause the 
outgoing request to be tagged, but this doesn't appear to be happening: 
I'm not seeing any AF22 DSCP marks at the router.
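
One hedged thought while debugging: I am not sure squid 3.1 accepts symbolic 
DSCP names such as af22 for tcp_outgoing_tos; the documented form takes a 
numeric TOS byte. A sketch under that assumption (AF22 is DSCP 20, which 
becomes 0x50 once shifted into the TOS byte):

acl youtube url_regex -i youtube
# numeric equivalent of AF22: DSCP 20 << 2 = 0x50
tcp_outgoing_tos 0x50 youtube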



This is squid version 3.1.

take care,
greg





[squid-users] ACLs - making up a multiple match requirement. (AND like)

2011-12-01 Thread Greg Whynott


I'm looking for guidance on creating delay pools, something I've never done 
before. Because it's a production system, I'd like to minimize downtime and 
the amount of time I'd have to spend here if I have to come in on the 
weekend to do it.



The intent is to limit bandwidth to a list of external networks, matched 
either by IP or by URL regex, to 1000kb/sec for the entire studio during 
work hours, _except_ for a list/group of excluded hosts inside, which will 
have unrestricted access to the same external hosts.


I'm attempting to use squid to limit youtube bandwidth during work hours 
for a particular inside network, whilst the other inside networks have full 
bandwidth. At the same time, the 'limited' network has full bandwidth to 
other, non-youtube sites. It appears I'd need some sort of AND logic (if 
src is youtube and dst is LAN-A, then ...).



I achieved this on the router using limiters/queues, but it appears this 
won't work going forward with the new 'exclusion' requirement management 
has asked me to implement. From the internet router's perspective, the 
source or destination always appears to be the squid server itself, which 
is why I'm now considering squid.



I looked around the documents and how-tos, but they all seem to use ACLs 
which reference a single set value, without exclusions.


In my perfect world, it would look something like this (I know this syntax 
probably doesn't exist; it's just an example of how I think it would look 
if it did):


acl youtubelimit  dstdomain .youtube.com
acl networkA youtubelimit
acl networkB !youtubelimit

where youtubelimit would be a delay pool, I guess...


I guess the short question would be: is there a method to set up ACLs 
with multiple criteria (an AND-like ACL)?

eg:
if src ip = 74.200.40.20 and dst ip = 192.168.1.4 then use limiter.
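
From what I can tell, squid's delay pools can express exactly this 
AND-plus-exclusion: every ACL named on one delay_access line must match, 
and '!' negates. A minimal sketch of my requirement (the network, the 
exempt-hosts file, the hours, and the rates are all illustrative 
assumptions):

acl youtube dstdomain .youtube.com
acl networkA src 10.1.1.0/24                    # the inside network to limit
acl exempt src "/etc/squid/exempt-hosts.txt"    # excluded inside hosts
acl workhours time MTWHF 08:00-18:00

delay_pools 1
delay_class 1 1                      # class 1: one aggregate bucket
delay_parameters 1 1024000/1024000   # fill rate / bucket size, bytes per second
# ACLs on one delay_access line are AND'd; '!' negates:
delay_access 1 allow networkA youtube workhours !exempt
delay_access 1 deny all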






[squid-users] squid does not work after ISP move.

2006-04-10 Thread Greg Whynott
Hello, 

Please CC me on any follow-ups as I no longer receive squid list emails. 
Thank you very much.


I have been using squid since the beginning of time (1999), and this is 
the first show-stopper I have run into. Nice work!


Over the weekend we changed ISPs. The only things changed on the network 
were the physical ISP router the firewall was connected to and the 
firewall's new IP on its external interface (ACLs remained the same). The 
DMZ and internal hosts/network had no changes, excluding DNS changes on 
the external DNS server, which does not service internal queries. Both the 
old and new internet solutions came to us over multiple T1s (using OSPF to 
load-share).


We tested over the weekend and everything seemed fine. From external sites 
we could hit all of our DMZ services, and from inside we could send mail; 
everything else worked as expected too. I assumed that because we could 
load external web sites, all was well and it was time to go home.

Today, Monday, when I came into work there were several emails about 
sites not loading. The common thing among these sites is that they seemed 
to want to POST something. For example, you cannot log onto webmail 
servers running SquirrelMail at all, gmail allows you to log on but not 
send mail, yahoo mail is broken in the same way, and many other sites 
will not load if they have forms or similar.


If I remove the proxy config from my browser and go direct to the site, 
things work.

Any ideas? Attached is a tar.gz of the squid logs. The production squid 
server is 2.5.STABLE8, and I just set up and tested 2.5.STABLE13 on 
another server with the same results; the logs are from the new proxy 
setup. Below is the part of the debug log where it looks as if things 
might start to go south (this is after restarting squid in debug mode: 
squid -k debug):



All of the lines below have the same timestamp: 2006/04/10 13:08:06

comm_poll: FD 15 ready for writing
commHandleWrite: FD 15: off 0, sz 87.
commHandleWrite: write() returns 87
cbdataValid: 0x84d3340
httpSendRequestEntry: FD 15: size 87: errflag 0.
httpSendRequestEntryDone: FD 15
httpSendRequestEntryDone: No brokenPosts list
httpSendComplete: FD 15: size 0: errflag 0.
commSetTimeout: FD 15 timeout 900
cbdataUnlock: 0x84d3340
comm_poll: 1+0 FDs ready
comm_poll: FD 15 ready for reading
httpReadReply: FD 15: len -1.
httpReadReply: FD 15: read failure: (104) Connection reset by peer.
fwdFail: ERR_READ_ERROR "Bad Gateway"
   http://notes.fqdn.com/src/redirect.php
comm_close: FD 15
commCallCloseHandlers: FD 15
commCallCloseHandlers: ch->handler=0x807e350
cbdataValid: 0x84d3340
storeUnlockObject: key 'BA8D1FD8AECCBFEFC149B8D63E0D93C6' count=2
cbdataFree: 0x84d3340
cbdataFree: 0x84d3340 has 1 locks, not freeing
cbdataUnlock: 0x84d3340
cbdataUnlock: Freeing 0x84d3340
commCallCloseHandlers: ch->handler=0x8071a30
cbdataValid: 0x84d28e0
fwdServerClosed: FD 15 http://notes.fqdn.com/src/redirect.php
fwdStateFree: 0x84d28e0
storeLockObject: key 'BA8D1FD8AECCBFEFC149B8D63E0D93C6' count=3
creating rep: 0x84d7190
init-ing hdr: 0x84d71d0 owner: 2
0x84d71d0 lookup for 38
0x84d71d0 lookup for 9
0x84d71d0 lookup for 22
errorConvert: %U --> 'http://notes.fqdn.com/src/redirect.php'
errorConvert: %U --> 'http://notes.fqdn.com/src/redirect.php'
errorConvert: %E --> '(104) Connection reset by peer'
errorConvert: %w --> '[EMAIL PROTECTED]'
errorConvert: %w --> '[EMAIL PROTECTED]'
errorConvert: %T --> 'Mon, 10 Apr 2006 17:08:06 GMT'
errorConvert: %h --> 'new0.dkp.com'
errorConvert: %s --> 'squid/2.5.STABLE13'
errorConvert: %S --> '



Generated Mon, 10 Apr 2006 17:08:06 GMT by new0.dkp.com (squid/2.5.STABLE13)


'
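
A hedged note on the trace above: the "No brokenPosts list" line refers to 
squid's broken_posts directive, which sends an extra CRLF after POST/PUT 
request bodies for destinations matching an ACL. I cannot say it is the 
cause here, but as a test it can be enabled for everything (the ACL name 
is illustrative):

# hypothetical test: send the extra CRLF after every POST/PUT body
acl buggyposts url_regex .
broken_posts allow buggyposts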


logs-squid.tar.gz
Description: application/gunzip


[squid-users] reverse proxy ACL question.

2005-12-09 Thread Greg Whynott

Can you use ACLs when running in a reverse proxy config?

I've noticed internal IPs are no longer replaced with the external IP 
after adding a regex ACL.


squid v2.5s5

thanks,
greg



[squid-users] reverse proxy / ACL issues.

2005-12-07 Thread Greg Whynott

Hello,
The question: Is there a way to use squid's reverse proxy (rproxy) feature 
with ACLs? Using ACLs in reverse proxy mode seems to break server name / 
IP rewriting.


-Version 2.5.STABLE5
-SUSE LINUX Enterprise Server 9 (i586)
-We are using squid in a reverse proxy config to allow a client to view 
pages on an internal web server which are related to the project we are 
working on for them. 
-The squid service sits out in the DMZ. 
-Both the internal network and the DMZ use private addresses.
-The internal web server is the front end to many internal services,  
which the client should not be able to view.


Things work as expected until I add an ACL. When an ACL is added, it 
seems the internal addresses are no longer replaced by the rproxy 
service.


For example:
Without ACLs, if I load (from the outside, out on the internet) 
http://external.site.ip.com/projects/CLIENTX/foo.html and foo.html has an 
href which takes you elsewhere on the same internal server, it works. 
Viewing the source shows the internal IPs have been replaced with 
external.site.ip.com's IP.


If I add an ACL, the internal IPs are no longer replaced with the 
rproxy's IP; instead the hrefs use the internal IPs. The first page 
loads, but any hrefs point to internal IPs, which of course breaks 
things for the client.


Here are the ACL bits I've added to the conf file: basically, any URL 
containing the string "clientx" can be loaded, and everything else cannot.


#
# URLs WHICH CLIENT CAN LOAD -ggw
#
#acl clienturl url_regex -i clientx
#acl noview url_regex -i grid io rgrid
#
# apply acl rules
#
#http_access deny noview
#http_access allow clienturl
#
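
For completeness, the same rules uncommented and with an explicit final 
deny; just a hedged tidy-up, not a fix. When no http_access line matches, 
squid applies the opposite of the last rule, so stating the default avoids 
surprises:

acl clienturl url_regex -i clientx
acl noview url_regex -i grid io rgrid
# first matching line wins, top to bottom
http_access deny noview
http_access allow clienturl
http_access deny all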

Any thoughts?

thanks,
greg