RE: [squid-users] Parent Proxy's, Not Failing over when Primary Parent is Down.

2009-05-01 Thread Dean Weimer
Thanks Amos, I was looking at the 3.0 page for the cache_peer definition since I am 
running 3.0.STABLE14, so I never saw those monitor options.  I am not running 
anything that requires the 3.0 branch, so I could switch to 2.7 to solve this 
problem.  Are there plans to include these options in the 3.x branches in the 
future?  I would prefer that my configuration not depend on an option that will 
not be available in the foreseeable future.
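
For reference, a minimal sketch of how the 2.7 monitor options might look
applied to the non-balanced parents (option names per the Squid 2.7 cache_peer
documentation; the probe URL and interval values are placeholders, not from
this setup):

# Probe each parent periodically; mark it dead when the probe fails (squid 2.7)
cache_peer 10.52.20.7 parent 8080 8181 name=DSL2 monitorurl=http://example.com/ monitorinterval=30 monitortimeout=5
cache_peer 10.52.20.6 parent 8080 8181 name=DSL1 monitorurl=http://example.com/ monitorinterval=30 monitortimeout=5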

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, April 30, 2009 11:39 PM
To: Dean Weimer
Cc: crobert...@gci.net; squid-users@squid-cache.org
Subject: Re: [squid-users] Parent Proxy's, Not Failing over when Primary Parent 
is Down.

Dean Weimer wrote:
 -Original Message-
 From: crobert...@gci.net [mailto:crobert...@gci.net] 
 Sent: Thursday, April 30, 2009 2:13 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Parent Proxy's, Not Failing over when Primary
 Parent is Down.
 
 Dean Weimer wrote:
 I have a parent/child proxy configuration that I have been testing;
 it is working except that some sites do not fail over to the second
 parent when the primary parent goes down.
 In the test scenario I have 2 parent proxies and one child proxy
 server; the parents are each configured twice using an alias IP address.
 This is done to load balance using round-robin for the majority of web
 traffic, yet allow some sites that we have identified as not working
 correctly with load balancing to go out through a single parent proxy.
 
 Since Squid 2.6 there has been a parent selection method called 
 sourcehash, which will keep a client-to-parent-proxy relationship 
 until the parent fails.
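 
 As a one-line sketch of that alternative (the name= labels here are
 placeholders, not from the original config):
 
 cache_peer 10.50.20.7 parent 8080 8181 name=DSL2SH sourcehash
 cache_peer 10.50.20.6 parent 8080 8181 name=DSL1SH sourcehash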
 
 I considered this, but was concerned that after a proxy server failure
 the majority of my load would be on one server, not taking advantage
 of both links once the problem was resolved.
 
 The load balanced traffic works as expected: the dead parent is
 identified and ignored until it comes back online.  The traffic that
 cannot be load balanced all uses HTTPS (not sure whether HTTPS has
 anything to do with the problem); when I stop the parent proxy 10.50.20.7
 (aka 10.52.20.7), the round-robin configuration is promptly marked as
 dead.  However, a website that has already been connected to and that is
 in the NONBAL acl just returns the proxy error from the child, giving a
 connect to (10.52.20.7) parent failed, connection denied.
 
 Hmmm...  You might have to disable server_persistent_connections, or 
 lower the value of persistent_request_timeout to have a better response 
 rate to a parent failure with your current setup.
 
 Also considered this, but figured it would break some sites that are
 working successfully with load balancing because they create a
 persistent connection, and making the request timeout too low would
 become annoying to the users.  Also, as the default is listed at 2
 minutes, I noticed that even after as much as 5 minutes the
 connection would not fail over.
 
 It will not mark the non-load-balanced parent dead; closing and
 restarting the browser doesn't help.  It will, however, change the status
 to dead if I connect to another site in the NONBAL acl.  Going back to
 the first site, I can then connect, even though I have to log in again,
 which is expected and is why these sites cannot be load balanced.
 Does anyone have any ideas, short of writing some sort of test script,
 that would cause the parent to be marked as dead when it fails, without
 any user intervention?
 Here is the cache_peer configuration from the child proxy.  FYI, I
 added the 5-second timeout to see if it had any effect; it didn't, except
 for speeding up the detection of the dead load-balanced proxy.
 ## Define Parent Caches
 # Cache Peer Timeout
 peer_connect_timeout 5 seconds
 # Round Robin Caches
 cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
 cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
 # Non Load Balanced caches
 cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
 cache_peer 10.52.20.6 parent 8080 8181 name=DSL1

 ## Define Parent Cache Access rules
 # Access Control Lists
 acl NONBAL dstdomain /usr/local/squid/etc/nonbal.dns.list
 # Rules for the Control Lists
 cache_peer_access DSL2BAL allow !NONBAL
 cache_peer_access DSL1BAL allow !NONBAL
 cache_peer_access DSL2 allow NONBAL
 cache_peer_access DSL1 allow NONBAL

 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co
 
 Chris
 
 I am currently doing some testing by creating access control lists for a
 couple of nonexistent subdomains on our own domain.  This just fetches
 the error page from the parent proxy for a nonexistent domain, so the
 testing shouldn't put an unnecessary load on the internet links.
 Then I allow each one through one of the non-balanced parents.  By
 accessing that page

[squid-users] NTLM negotiate + IP authentication caching for heavily used squid deployments...

2009-05-01 Thread jason lawrence

Hi all,

  We are looking to replace our proxy infrastructure
with squid, but I am concerned about the performance of squid using the
ntlm_auth samba helper (specifically using negotiate).  We are almost
ready to do some LoadRunner stress tests, but I wanted to see if anyone
has some baseline statistics out there?  I saw that squid offers
authenticate_ip_shortcircuit_ttl in 2.7.  Has anyone used the IP caching
feature, and are there plans to
port it to the 3.x branch? (We are really after the ICAP functionality.)
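
For reference, a minimal sketch of how the 2.7 short-circuit feature is
switched on (the TTL value is a placeholder; tune it to taste):

# Cache a successful proxy_auth result per client IP for 10 seconds,
# skipping the NTLM/negotiate handshake on subsequent requests (squid 2.7)
authenticate_ip_shortcircuit_ttl 10 seconds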
 
I am currently investigating a way to push our AD user/group
listings into a flat-file tree once a night, so squidGuard can do AD
group lookups locally.  That should save authorization cycles, but the
samba authentication is what really worries me...  I really wish
kerberos were prevalent enough that we could just force kerberos/Basic,
but that's the way it goes...  I don't even know how we could load test
kerberos authentication...

Current deployment:
--

squid:   3.0.11
squidGuard:  1.4
samba: 3.2.7
server:  RHAS 5.2  64bit   (Sunfire 4100 8gigs of ram)


Current configs (squid is not yet optimized)

http_port xx.xx.x.xxx:8080
tcp_outgoing_address  xx.xx.x.xxx

#
#   AUTHENTICATION
#

auth_param negotiate program 
/dev/pkgs/linux/intel/squid/3.0.11/libexec/squid_kerb_auth -d
auth_param negotiate children 10
auth_param negotiate keep_alive on

auth_param ntlm program /dev/pkgs/linux/intel/samba/3.2.7/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm keep_alive on

auth_param basic program /dev/pkgs/linux/intel/samba/3.2.7/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 6 hours
auth_param basic casesensitive off

#
#   ACL'S
#

acl manager proto cache_object
acl localhost src 127.0.0.1/32 
acl to_localhost dst 127.0.0.0/8 
acl CONNECT method CONNECT
acl java browser Java/1.3 Java/1.4
acl ntlm_users proxy_auth REQUIRED

http_access allow manager localhost
http_access deny manager
http_access allow java
http_access allow ntlm_users
http_access deny all

#
#   DNS Settings
#

dns_defnames on
dns_nameservers xx.xx.xx.xxx xx.xx.xx.xxx

#
#
#

hierarchy_stoplist cgi-bin ?
access_log /etrade/pkgs/linux/intel/squid/3.0.11/var/logs/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
icp_port 3130
coredump_dir /etrade/pkgs/linux/intel/squid/3.0.11/var/cache

#
#   Squid Guard
#

redirect_program /dev/pkgs/linux/intel/squidGuard/1.4/bin/squidGuard
redirect_children 10

#
#   Kaspersky AV
#

icap_enable on
icap_send_client_ip on
icap_service is_kav_req reqmod_precache 0 icap://localhost:1344/av/reqmod
icap_service is_kav_resp respmod_precache 0 icap://localhost:1344/av/respmod
acl acl_kav_GET method GET
icap_class ic_kav_req is_kav_req
icap_class ic_kav_resp is_kav_resp
icap_access ic_kav_resp allow all
icap_access ic_kav_req allow all !acl_kav_GET




SAMBA
--

# global parameters
 [global]
# ADS membership settings
workgroup = DEV
netbios name = PROXYDEV1
realm = DEV.COM 
server string = proxydev1.dev.com 
security = ADS
winbind trusted domains only = yes
#winbind cache time = 600      # tested samba auth caching, doesn't work for ntlm_auth
#winbind offline logon = true  #
allow trusted domains = no
use kerberos keytab = true

# IdMap settings
winbind use default domain = yes
idmap domains = DEV
idmap config DEV:backend = nss
idmap 

Re: [squid-users] Transparent proxy with HTTPS on freebsd

2009-05-01 Thread abdul sami
Dear All,

So, champs, now the interesting part starts. OK:

A few days ago we had the proxy configured in the following way.

                 DR Site
                    \
                     \ (int: bge0)            (int: bge1)
 internal net -- lan switch -- Squid on BSD -- external firewall -- public net
                               IP=X      \ IP=Y
                                          \
                                       Branches

1. The above diagram shows that our internal net and DR site are
connected to squid on interface bge0 and use transparent proxying,
whereas branches come in on bge1 and use a manually configured proxy to
get access to the internet.

2. In the above configuration, http and https were working perfectly fine.

After that, major changes were made to our company network, and as a
result our proxy scenario also changed, as below.

                                               DR site
                                                  |
                   (int: bge0)                    | (int: bge1)
 internal net -- lan switch -- Squid on BSD ------+-- external firewall -- public net
                               IP=X      \ IP=Y
                                          \
                                       Branches

1. The network team forcibly shifted the DR site traffic to bge1, and as
a result internet access at the DR site stopped functioning.

2. My colleague who previously looked after the proxy changed the
following rule in the ipfw file (as per his statement), and after that
internet access for http started working, but https traffic stopped
working on both sides where the transparent proxy was in use, i.e. at the
DR site and on the internal net; however https still works at the branches.

RULE: ipfw add divert natd all from any to any via bge1

CHANGED TO:

RULE: ipfw add divert natd all from internal net/24 to any via bge1

3. My network colleague told me that the proxy is adding its address as
the source address to http packets but not to https, and passes https
packets with the source address of the internal net, which is ultimately
blocked at the perimeter firewall.

Now please note that I have only freshly started working on squid; only
a couple of months have passed.

So when https didn't work, I went through documentation, forums, etc.
(one example being your previous answers) and found that https would
not work with squid in a transparent configuration.  I was SURPRISED,
then, at how it had been working previously.  Anyway, now when I tell my
head that squid in transparent proxy mode won't work for https, he is
not ready to accept it.

I argued with my network colleagues that some other settings must have
been in place for https, but they do not agree; they say they checked
everything, no such settings were there, and the proxy was doing all the
work.


Repeating the problem: currently the proxy adds its address as source to
http traffic but not https; in the https case it simply forwards packets
with the source address of the internal net, and the perimeter firewall
allows proxy-IP traffic and drops internal-net addresses, so https does
not work.

So this is the whole story; I am really stuck.  What should I do?

SUGGESTIONS DESPERATELY NEEDED.

With Regards,




On Thu, Apr 30, 2009 at 8:24 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 First of all, let me thank you very much, all, for the replies.

 I am searching/reading about PAC / port forwarding for squid on FreeBSD,
 but I would be grateful if you could provide me an example/source.

 http://wiki.squid-cache.org/Technology/WPAD


 Again I repeat: I only want to allow https sites (like gmail, yahoo)
 behind my transparent proxy to work.


 Once the requests are going to Squid properly this is a simple matter of
 ACLs.

 Amos




RE: [squid-users] Squid DENY access www.Symantec.com

2009-05-01 Thread panagiotis polychronopoulos

Actually the problem is that squid cannot permit access to the web when
the DNS name of a web page resolves to more than one IP address.
for example: nslookup www.symantec.com 

Server:  zeus.tesyd.teimes.gr
Address:  10.0.0.220
Non-authoritative answer:
Name:    a568.d.akamai.net
Addresses:  212.205.43.22, 212.205.43.31, 212.205.43.38, 212.205.43.6
  212.205.43.29, 212.205.43.23
Aliases:  www.symantec.com, www.symantec.d4p.net
  symantec.georedirector.akadns.net
 
My local DNS server can answer this query.
What do I have to do to fix this problem?


 Date: Fri, 1 May 2009 09:01:59 +0800
 From: pa...@arcor.de
 To: poly_pa...@hotmail.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid DENY access www.Symantec.com

 panagiotis polychronopoulos wrote:

 Hi to everyone
 I have a problem. The squid do not permit access to www.symantec.com portal 
 becouse can not resolve the DNS. How i could solve the mystery?


 use a correct DNS server for squid, or create a hosts entry for that domain name.

 Regards.
_
Έχετε Messenger; Έχετε και Windows Live. Μάθετε περισσότερα.
http://microsoft.com/windows/windowslive


RE: [squid-users] squid on windows domain users

2009-05-01 Thread Dustin Hane
What type of IP conflicts? Is your DHCP server handing out the same IP address 
to different machines?
You may want to have a setup similar to this:
In your domain controller you have 2 subgroups under the COMPUTERS OU.
Container 1 = Internet Access Allowed
Place all the PCs you want to allow internet access in this container.
Container 2 = Not Allowed
Place all the PCs without access here.
Direct your DHCP server to hand out a certain range to container 1 and a 
different range to container 2.  Within squid, set up a src acl that allows 
container 1's range, as sketched below.
Follow?
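
A minimal squid.conf sketch of that last step (the address range here is
hypothetical; use whatever range your DHCP server hands to container 1):

# range assigned to the Internet Access Allowed container (hypothetical)
acl allowed_pcs src 192.168.10.0/24
http_access allow allowed_pcs
http_access deny all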


 dear friends,
 
 I'm from India and am using a 2 Mbps leased line
 connection, distributing it through a windows 2003 server with
 squid and IP-based filtering.
 Frequently I suffer from problems like IP conflicts, because
 users who don't have internet access track the IPs on which
 internet is available and change theirs to match.
 
 What is the remedy for this?
 Is there a solution like this:
 all computers that need to have internet access
 should be in the domain of the system on which squid is
 installed, and only these will have internet access;
 no other computer on the LAN can access the internet, no matter
 what its IP is.

 Or is there MAC-based filtering available for windows in squid?
 
 What are your opinions, friends?
 
 bye







RE: [squid-users] squid on windows domain users

2009-05-01 Thread Dustin Hane
I'm sorry, I misunderstood: people are changing the addresses themselves.  The 
best way to stop them from doing that is to change the group policy settings in 
your domain controller to remove access to the network control panel for both 
users and local machines.
Go to:
User Configuration -> Network -> Network Connections
Enable: Prohibit access to properties of components of a LAN connection
Enable: Prohibit TCP/IP advanced configuration
Enable: Prohibit access to the Advanced Settings item on the Advanced menu
Enable: Prohibit access to properties of a LAN connection
Enable: Prohibit access to the New Connection Wizard

Then apply this GPO to your COMPUTERS OU and you'll be all set. 

Thanks
Dustin

-Original Message-
From: Leonardo Carneiro [mailto:lscarne...@veltrac.com.br] 
Sent: Wednesday, April 29, 2009 9:45 AM
To: Vicks
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squid on windows domain users

Hi Vicks,

I'm from Brazil and I hope I can help you.
The way I understood it, users who don't have a full internet connection 
change their IPs to use resources that would otherwise be denied to them.

You have tons of ways to prevent this.  One way is to use the domain 
login (with an external program) to do the filtering.  That will prevent 
IP conflicts in your network.

Vicks wrote:
 dear friends,

 I'm from India and am using a 2 Mbps leased line
 connection, distributing it through a windows 2003 server with
 squid and IP-based filtering.
 Frequently I suffer from problems like IP conflicts, because
 users who don't have internet access track the IPs on which
 internet is available and change theirs to match.

 What is the remedy for this?
 Is there a solution like this:
 all computers that need to have internet access
 should be in the domain of the system on which squid is
 installed, and only these will have internet access;
 no other computer on the LAN can access the internet, no matter
 what its IP is.

 Or is there MAC-based filtering available for windows in squid?

 What are your opinions, friends?

 bye


-- 

*Leonardo de Souza Carneiro*
*Veltrac - Tecnologia em Logística.*
lscarne...@veltrac.com.br
http://www.veltrac.com.br
Fone Com.: (43) 2105-5600
Av. Higienópolis 1601, Ed. Eurocenter, Sl. 803
Londrina - PR
Cep: 86015-010






[squid-users] OpenBSD / Transparent proxy and session helper :)

2009-05-01 Thread Parvinder Bhasin
Just wondering if this is a typo on the squid wiki page for the OpenBSD /  
transparent proxy example: in PF the port is redirected to 3128, but inside  
the squid config it's http_port 3129.  Is this correct?


If squid were set to listen on 3129, and you were to spin off netcat to  
listen on port 3129 and then start squid, squid would surely fail, as the  
port would already be in use by the netcat listener.

It would be great to know whether this doc is correct or has a typo.

http://wiki.squid-cache.org/ConfigExamples/Intercept/OpenBsdPf

My main question, though, is to find a session_helper example.  Is there  
a small session_helper example page out there that someone can point  
me to?
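
For what it's worth, a minimal sketch of wiring the bundled squid_session
helper into an external ACL (the helper path, timeout, and splash URL are
placeholders, not taken from any particular doc):

# one session per client IP, expiring after 2 hours of inactivity
external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC /usr/local/squid/libexec/squid_session -t 7200
acl session external session
http_access deny !session
deny_info http://your.server/splash.html?url=%s session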


-Parvinder Bhasin


Re: [squid-users] External C program

2009-05-01 Thread Julien Philibin
On Thu, Apr 30, 2009 at 4:40 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 Julien Philibin wrote:

 On Wed, Apr 29, 2009 at 11:15 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 Very interesting Bharath !!!

 Yes thank you. You have identified the issue and we can now tell Julien
 exactly what he has to do.

 What would be your advice to get my program working ?!

 Use fgets(). The scanf() family apparently does not handle EOF in the way
 needed.

 Thus to work your code must be:

  char line[8196];
  char ip[45];
  char url[8196];

  ip[0] = '\0';
  url[0] = '\0';

  while( fgets(line, 8196, stdin) != NULL ) {
     snscanf(sbuf, 8196, "%s %s", ip, url);
     // happy joy
  }

 Amos


 Hey that's smart! :)

 I'm going to go for that and if things go wrong, I'll let you know ...

 It is slightly wrong. The sbuf there should be 'line'.
 I hope your compiler catches that also.


Yep I found it out :)

 And please do use snscanf instead of scanf. It will save you from many
 security and segfault bugs over your coding time.


You are talking about snscanf, but neither man snscanf nor google is
showing me relevant material about this function ... Am I missing
something?  I am using sscanf instead, for now ...
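
If you do stay with plain sscanf, a portable way to get the same safety is
explicit field-width limits, so the destination buffers cannot overflow.  A
minimal self-contained sketch (buffer sizes follow Amos's example above):

  #include <stdio.h>

  int main(void) {
      char line[8196];
      char ip[45];
      char url[8196];

      while (fgets(line, sizeof(line), stdin) != NULL) {
          ip[0] = url[0] = '\0';
          /* %44s and %8195s leave room for the terminating NUL */
          if (sscanf(line, "%44s %8195s", ip, url) == 2) {
              /* act on (ip, url), then answer squid on stdout */
              printf("OK\n");
          } else {
              printf("ERR\n");
          }
          fflush(stdout);
      }
      return 0;
  }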


 Thank you everyone!

 btw: Amos, any idea why I randomly get 127.0.0.1 instead of my real
 IP in the logs?


 As someone said earlier 127.0.0.1 is one of the IPs assigned to your
 machine. It is a special IPv4 address assigned as localhost. Every machine
 with networking has that same IP for private non-Internet traffic use.

 Most machines will have two of these; 127.0.0.1 for IPv4 and ::1 for IPv6.
 They are identical in use and purpose for their own IP protocols.


 Why you get it randomly I don't know. I expect it to show up consistently
 for requests the OS identifies as local-machine only. And never for requests
 the OS thinks are remote global.

 If your testing uses localhost:3128 as the proxy, it will connect to
 127.0.0.1 privately. If it uses the public IP, or a name resolving to the
 public IP, it will use a global public connection.


I am using the public IP address to access my proxy. I'll try to
remove the 127.0.0.1 from the hosts file and see how things go on ...



 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Thank you everyone again for your time, and sorry for the delay in
getting back to you; I've been doing some research and getting my
hands dirty with my external helpers all week long!

Julien


Re: [squid-users] Squid DENY access www.Symantec.com

2009-05-01 Thread Jeff Pang

panagiotis polychronopoulos wrote:

Actually the problem is that squid cannot permit access to the web when
the DNS name of a web page resolves to more than one IP address.


I don't think Squid has a problem proxying a domain which has 
more than one IP, unless the website has session persistence problems 
(and even then, the site's hostmaster should be doing something to resolve 
that, not you).


If you still doubt it, you could put an entry in /etc/hosts giving that 
domain only one IP, then run Squid to test it.
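
For example (using one of the addresses from the nslookup output above; any
one of them would do):

# /etc/hosts -- pin the domain to a single address for testing
212.205.43.22   www.symantec.com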


Finally, you may check Squid's logs carefully to find out what the real 
problem is.



--
Jeff Pang
DingTong Technology
www.dtonenetworks.com


RE: [squid-users] Parent Proxy's, Not Failing over when Primary Parent is Down.

2009-05-01 Thread Amos Jeffries
 Thanks Amos, I was looking at the 3.0 page for the cache_peer definition since
 I am running 3.0.STABLE14, so I never saw those monitor options.  I am not
 running anything that requires the 3.0 branch, so I could switch to 2.7 to
 solve this problem.  Are there plans to include these options in the 3.x
 branches in the future?  I would prefer that my configuration not depend on
 an option that will not be available in the foreseeable future.

Yes they are on the roadmap for porting. Just not in 3.0 or 3.1.

Amos


 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Thursday, April 30, 2009 11:39 PM
 To: Dean Weimer
 Cc: crobert...@gci.net; squid-users@squid-cache.org
 Subject: Re: [squid-users] Parent Proxy's, Not Failing over when Primary
 Parent is Down.

 Dean Weimer wrote:
 -Original Message-
 From: crobert...@gci.net [mailto:crobert...@gci.net]
 Sent: Thursday, April 30, 2009 2:13 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Parent Proxy's, Not Failing over when Primary
 Parent is Down.

 Dean Weimer wrote:
 I have a parent/child proxy configuration that I have been testing;
 it is working except that some sites do not fail over to the second
 parent when the primary parent goes down.
 In the test scenario I have 2 parent proxies and one child proxy
 server; the parents are each configured twice using an alias IP address.
 This is done to load balance using round-robin for the majority of web
 traffic, yet allow some sites that we have identified as not working
 correctly with load balancing to go out through a single parent proxy.


 Since Squid 2.6 there has been a parent selection method called
 sourcehash, which will keep a client-to-parent-proxy relationship
 until the parent fails.

 I considered this, but was concerned that after a proxy server failure
 the majority of my load would be on one server, not taking advantage
 of both links once the problem was resolved.

 The load balanced traffic works as expected: the dead parent is
 identified and ignored until it comes back online.  The traffic that
 cannot be load balanced all uses HTTPS (not sure whether HTTPS has
 anything to do with the problem); when I stop the parent proxy 10.50.20.7
 (aka 10.52.20.7), the round-robin configuration is promptly marked as
 dead.  However, a website that has already been connected to and that is
 in the NONBAL acl just returns the proxy error from the child, giving a
 connect to (10.52.20.7) parent failed, connection denied.

 Hmmm...  You might have to disable server_persistent_connections, or
 lower the value of persistent_request_timeout to have a better response
 rate to a parent failure with your current setup.

 Also considered this, but figured it would break some sites that are
 working successfully with load balancing because they create a
 persistent connection, and making the request timeout too low would
 become annoying to the users.  Also, as the default is listed at 2
 minutes, I noticed that even after as much as 5 minutes the
 connection would not fail over.

 It will not mark the non-load-balanced parent dead; closing and
 restarting the browser doesn't help.  It will, however, change the status
 to dead if I connect to another site in the NONBAL acl.  Going back to
 the first site, I can then connect, even though I have to log in again,
 which is expected and is why these sites cannot be load balanced.
 Does anyone have any ideas, short of writing some sort of test script,
 that would cause the parent to be marked as dead when it fails, without
 any user intervention?
 Here is the cache_peer configuration from the child proxy.  FYI, I
 added the 5-second timeout to see if it had any effect; it didn't, except
 for speeding up the detection of the dead load-balanced proxy.
 ## Define Parent Caches
 # Cache Peer Timeout
 peer_connect_timeout 5 seconds
 # Round Robin Caches
 cache_peer 10.50.20.7 parent 8080 8181 name=DSL2BAL round-robin
 cache_peer 10.50.20.6 parent 8080 8181 name=DSL1BAL round-robin
 # Non Load Balanced caches
 cache_peer 10.52.20.7 parent 8080 8181 name=DSL2
 cache_peer 10.52.20.6 parent 8080 8181 name=DSL1

 ## Define Parent Cache Access rules
 # Access Control Lists
 acl NONBAL dstdomain /usr/local/squid/etc/nonbal.dns.list
 # Rules for the Control Lists
 cache_peer_access DSL2BAL allow !NONBAL
 cache_peer_access DSL1BAL allow !NONBAL
 cache_peer_access DSL2 allow NONBAL
 cache_peer_access DSL1 allow NONBAL

 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co

 Chris

 I am currently doing some testing by creating access control lists for a
 couple of nonexistent subdomains on our own domain.  This just fetches
 the error page from the parent proxy for a nonexistent domain, so the
 testing shouldn't put an unnecessary load on the internet links.
 Then I allow each one

Re: [squid-users] Transparent proxy with HTTPS on freebsd

2009-05-01 Thread Amos Jeffries
 Dear All,

 So, champs, now the interesting part starts. OK:

 A few days ago we had the proxy configured in the following way.

                  DR Site
                     \
                      \ (int: bge0)            (int: bge1)
  internal net -- lan switch -- Squid on BSD -- external firewall -- public net
                                IP=X      \ IP=Y
                                           \
                                        Branches

 1. The above diagram shows that our internal net and DR site are
 connected to squid on interface bge0 and use transparent proxying,
 whereas branches come in on bge1 and use a manually configured proxy to
 get access to the internet.

 2. In the above configuration, http and https were working perfectly fine.

 After that, major changes were made to our company network, and as a
 result our proxy scenario also changed, as below.

                                                DR site
                                                   |
                    (int: bge0)                    | (int: bge1)
  internal net -- lan switch -- Squid on BSD ------+-- external firewall -- public net
                                IP=X      \ IP=Y
                                           \
                                        Branches

 1. The network team forcibly shifted the DR site traffic to bge1, and as
 a result internet access at the DR site stopped functioning.

 2. My colleague who previously looked after the proxy changed the
 following rule in the ipfw file (as per his statement), and after that
 internet access for http started working, but https traffic stopped
 working on both sides where the transparent proxy was in use, i.e. at the
 DR site and on the internal net; however https still works at the branches.

 RULE: ipfw add divert natd all from any to any via bge1

 CHANGED TO:

 RULE: ipfw add divert natd all from internal net/24 to any via bge1

 3. My network colleague told me that the proxy is adding its address as
 the source address to http packets but not to https, and passes https
 packets with the source address of the internal net, which is ultimately
 blocked at the perimeter firewall.

 Now please note that I have only freshly started working on squid; only
 a couple of months have passed.

 So when https didn't work, I went through documentation, forums, etc.
 (one example being your previous answers) and found that https would
 not work with squid in a transparent configuration.  I was SURPRISED,
 then, at how it had been working previously.  Anyway, now when I tell my
 head that squid in transparent proxy mode won't work for https, he is
 not ready to accept it.

 I argued with my network colleagues that some other settings must have
 been in place for https, but they do not agree; they say they checked
 everything, no such settings were there, and the proxy was doing all the
 work.

 Repeating the problem: currently the proxy adds its address as source to
 http traffic but not https; in the https case it simply forwards packets
 with the source address of the internal net, and the perimeter firewall
 allows proxy-IP traffic and drops internal-net addresses, so https does
 not work.

 So this is the whole story; I am really stuck.  What should I
 do?


Please note:
 HTTPS forwarding sounds like it is being done by the OS routing on the
proxy box, not by the proxy software itself.

 Also, using the WPAD solution I already proposed will make the clients go
through the proxy software, with the same effects and controls as HTTP
traffic.

The other proper solution is for the main firewall to be updated to allow
the appropriate internal IPs to use HTTPS port 443 outbound.

One hack, which itself will break eventually and meanwhile has holes of its
own ... is to configure the proxy box firewall with those same IPs which
should be allowed HTTPS, and source-NAT them to the proxy box IP.  Be
careful that you only allow the acceptable IPs through this NAT, though.
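
A minimal ipfw sketch of that hack on the setup described above (the subnet
is hypothetical, and natd must be aliasing to the proxy box's address on
bge1):

# pass outbound HTTPS from the allowed internal range through natd,
# so it leaves bge1 with the proxy box's source address
ipfw add divert natd tcp from 192.168.1.0/24 to any 443 out via bge1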


Amos



RE: [squid-users] Squid DENY access www.Symantec.com

2009-05-01 Thread Amos Jeffries

 Actually the problem is that squid cannot permit access to the web when
 the DNS name of a web page resolves to more than one IP address.

 for example: nslookup www.symantec.com

 Server:  zeus.tesyd.teimes.gr
 Address:  10.0.0.220
 Non-authoritative answer:
 Name:    a568.d.akamai.net
 Addresses:  212.205.43.22, 212.205.43.31, 212.205.43.38, 212.205.43.6
   212.205.43.29, 212.205.43.23
 Aliases:  www.symantec.com, www.symantec.d4p.net
   symantec.georedirector.akadns.net

 My local DNS server can answer this query.
 What do I have to do to fix this problem?

I don't think it's the IPs which are the problem.

Since DNS admins started getting paranoid about recursive resolvers this
year, it has become common to see recursive resolvers which are unable to
follow CNAME results properly.

It looks to me like www.symantec.com chains through two CNAME domains (
www.symantec.com -> www.symantec.d4p.net ->
symantec.georedirector.akadns.net ) in order to retrieve those IPs.  Squid
depends on the DNS resolver to do any such recursion;
nslookup and similar tools will often 'helpfully' do the recursion themselves
when the DNS resolver fails.
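
One quick way to check is to query the resolver squid uses directly (the
address below is the one from the nslookup output above) and see whether the
answer section follows the CNAME chain all the way down to A records:

dig @10.0.0.220 www.symantec.com A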

Amos


 
 Date: Fri, 1 May 2009 09:01:59 +0800
 From: pa...@arcor.de
 To: poly_pa...@hotmail.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid DENY access www.Symantec.com

 panagiotis polychronopoulos wrote:

 Hi to everyone.
 I have a problem: squid does not permit access to the www.symantec.com
 portal because it cannot resolve the DNS.  How could I solve the mystery?


 use a correct DNS server for squid, or create a hosts entry for that
 domain name.

 Regards.
 _
 Έχετε Messenger; Έχετε και Windows Live. Μάθετε
 περισσότερα.
 http://microsoft.com/windows/windowslive




Re: [squid-users] External C program

2009-05-01 Thread Amos Jeffries
 On Thu, Apr 30, 2009 at 4:40 AM, Amos Jeffries squ...@treenet.co.nz
 wrote:
 Julien Philibin wrote:

 On Wed, Apr 29, 2009 at 11:15 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 Very interesting Bharath !!!

 Yes thank you. You have identified the issue and we can now tell
 Julien
 exactly what he has to do.

 What would be your advice to get my program working ?!

 Use fgets(). The scanf() family apparently does not handle EOF in the way
 needed.

 Thus to work your code must be:

  char line[8196];
  char ip[45];
  char url[8196];

  ip[0] = '\0';
  url[0] = '\0';

  while( fgets(line, 8196, stdin) != NULL ) {
     snscanf(sbuf, 8196, "%s %s", ip, url);
     // happy joy
  }

 Amos


 Hey that's smart! :)

 I'm going to go for that and if things go wrong, I'll let you know ...

 It is slightly wrong. The sbuf there should be 'line'.
 I hope your compiler catches that also.


 Yep I found it out :)

 And please do use snscanf instead of scanf. It will save you from many
 security and segfault bugs over your coding time.


 You are talking about snscanf, but neither man snscanf nor google is
 showing me relevant material about this function ... Am I missing
 something?  I am using sscanf instead, for now ...


It's not common; some compilers don't have it.
It has exactly the same semantics as sscanf, but lets you limit the length of
the buffer scanned, so there is no need to nul-terminate the buffer.


 Thank you everyone!

 btw: Amos, any idea why I get a randomly 127.0.0.1 instead of my real
 Ip in the logs ?


 As someone said earlier 127.0.0.1 is one of the IPs assigned to your
 machine. It is a special IPv4 address assigned as localhost. Every
 machine
 with networking has that same IP for private non-Internet traffic use.

 Most machines will have two of these; 127.0.0.1 for IPv4 and ::1 for
 IPv6.
 They are identical in use and purpose for their own IP protocols.


 Why you get it randomly I don't know. I expect it to show up
 consistently
 for requests the OS identifies as local-machine only. And never for
 requests
 the OS thinks are remote global.

 If your testing uses localhost:3128 as the proxy, it will connect to
 127.0.0.1 privately. If it uses the public IP, or a name resolving to the
 public IP, it will use a global public connection.


 I am using the public IP address to access my proxy. I'll try to
 remove the 127.0.0.1 from the hosts file and see how things go on ...



 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


 Thank you everyone again for your time, and sorry for the delay in
 getting back to you; I've been doing some research and getting my
 hands dirty with my external helpers all week long!

 Julien