Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread XUFENG
Hi Sami,

In this case, please refer to 
http://www.linux-ha.org/
The Heartbeat tool they release will help you out.
You can have your squid servers listen on one virtual IP address (VIP); when the 
primary one goes down, the VIP is brought up on the secondary one by the 
Heartbeat tool automatically.
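As a minimal sketch of that setup (hostnames, the VIP, and the init script name below are assumptions, not from this thread):

```
# /etc/ha.d/haresources -- identical on both nodes (hypothetical names).
# proxy1 is the preferred node; when it stops sending heartbeats,
# Heartbeat brings up the VIP on proxy2 and starts the squid service there.
proxy1 IPaddr::192.168.0.100 squid
```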

--   
XUFENG
2009-06-15

-
From: abdul sami
Sent: 2009-06-15 13:47:09
To: squid-users
Cc:
Subject: [squid-users] How to setup squid proxy to run in fail-over mode

Dear all,

Now that I have set up a proxy server, as a next step I want to run it
in fail-over (high-availability) mode, so that if one proxy goes down
for any reason, the second proxy automatically takes over and starts
serving requests.

Any help in the shape of articles/steps would be highly appreciated.

Thanks and regards,

A Sami




Re: [squid-users] [Repost] Querying and Extraction from a Squid Cache Directory

2009-06-15 Thread Genaro Flores

Ah, thanks. Sorry for being impatient. I'll be waiting.

--On Sunday, June 14, 2009 18:04 +1200 Amos Jeffries squ...@treenet.co.nz 
wrote:



Genaro Flores wrote:

Reposting this in the hope that someone considers it. Even if you don't
have a definite answer or the answer is negative please do give me a
short reply so that I know the question has been considered by someone.
Thanks again.


1) It's a weekend.

2) Yes, it's getting to people; if anyone has an answer they will post it.

Amos




Dear List,

I am using the latest stable release of the native NT port of Squid. I
would like to know if there are tools for querying an existing cache
directory structure and for extracting desired original objects sans
headers. I was directed to ufsdump and cossdump on #sq...@freenode.net,
but those don't seem to be available with the NT port, and I couldn't
find online documentation for them, so I am at a loss as to whether
they perform the tasks just described and whether NT ports exist.
Please kindly inform me of the existence and state of any such tools,
preferably for NT systems. Links to ufsdump documentation would also
be somewhat helpful.

Thanks in advance.



--
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
   Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1







RE: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread Mario Remy Almeida
Hi Sagar,

Just a question:

How can a DNS server determine that the primary server is down and that it
should resolve to the secondary server's IP?

//Remy

On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
 Hi Abdul,
 
 Please try to enter 2 different IPs in the DNS:
 
 10.xxx.yyy.zz1 (proxyA) as primary (the proxyA name should be the same on both
 servers.)
 10.xxx.yyy.zz2 (proxyA) as secondary.
 
 Start squid services on both the servers (Primary & Secondary).
 
 If the primary server fails, the DNS will resolve the secondary IP for proxyA
 and the squid on the second server will kick in automatically.
 
 Hope I am able to explain it properly.
 
 Regards,
 
 Sagar Navalkar
 
 
 -Original Message-
 From: abdul sami [mailto:sami.me...@gmail.com] 
 Sent: Monday, June 15, 2009 11:17 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] How to setup squid proxy to run in fail-over mode
 
 Dear all,
 
 Now that i have setup a proxy server, as a next step i want to run it
 in fail-over high availability mode, so that if one proxy is down due
 to any reason, second proxy should automatically be up and start
 serving requests.
 
 any help in shape of articles/steps would be highly appreciated.
 
 Thanks and regards,
 
 A Sami
 



--
Disclaimer and Confidentiality


This material has been checked for  computer viruses and although none has
been found, we cannot guarantee  that it is completely free from such problems
and do not accept any  liability for loss or damage which may be caused.
Please therefore  check any attachments for viruses before using them on your
own  equipment. If you do find a computer virus please inform us immediately
so that we may take appropriate action. This communication is intended  solely
for the addressee and is confidential. If you are not the intended recipient,
any disclosure, copying, distribution or any action  taken or omitted to be
taken in reliance on it, is prohibited and may be  unlawful. The views
expressed in this message are those of the  individual sender, and may not
necessarily be that of ISA.


RE: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread Sagar Navalkar
Hey Remy,

The DNS server does not determine which server is down; however, if it is
unable to resolve the 1st entry, it will automatically fall back to the 2nd
entry.
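For illustration, the entries being described might look like this in a BIND zone file (names and addresses are placeholders; note that this relies on client-side resolver fallback rather than any real health check):

```
; two A records for the same proxy name (hypothetical zone)
proxya    IN  A   10.0.0.1   ; primary squid box
proxya    IN  A   10.0.0.2   ; secondary squid box
```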

Regards,

Sagar Navalkar
Team Leader


-Original Message-
From: Mario Remy Almeida [mailto:malme...@isaaviation.ae] 
Sent: Monday, June 15, 2009 1:36 PM
To: Sagar Navalkar
Cc: squid-users@squid-cache.org; 'abdul sami'
Subject: RE: [squid-users] How to setup squid proxy to run in fail-over mode

Hi Sagar,

Just a Question?

How can a DNS server determine that the primary server is down and it
should resolve the secondary server IP?

//Remy

On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
 Hi Abdul,
 
 Please try to enter 2 different IPs in the DNS  
 
 10.xxx.yyy.zz1 (proxyA) as primary (proxyA-Name should be same on both the
 servers.)
 10.xxx.yyy.zz2 (proxyA) as secondary.
 
 Start squid services on both the servers (Primary  Secondary)
 
 If Primary server fails, the DNS will resolve secondary IP for proxyA 
the
 squid on second server will kick in automatically..
 
 Hope am able to explain it properly.
 
 Regards,
 
 Sagar Navalkar
 
 
 -Original Message-
 From: abdul sami [mailto:sami.me...@gmail.com] 
 Sent: Monday, June 15, 2009 11:17 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] How to setup squid proxy to run in fail-over mode
 
 Dear all,
 
 Now that i have setup a proxy server, as a next step i want to run it
 in fail-over high availability mode, so that if one proxy is down due
 to any reason, second proxy should automatically be up and start
 serving requests.
 
 any help in shape of articles/steps would be highly appreciated.
 
 Thanks and regards,
 
 A Sami
 







[squid-users] Load Balancing Query

2009-06-15 Thread Mario Remy Almeida
Hi All,

Want to know if load balancing is possible with Squid while maintaining
sessions.
The health check should be on TCP ports.

e.g.:
Server A - Active, port 8080
Server B - Active, port 8080

Client -> Squid -> Server A and/or B

Request 1 comes from 'Client A': Squid forwards the request to 'Server A'.
Request 2 comes from 'Client A': Squid forwards the request to 'Server A',
and so on;
any further request from 'Client A', Squid should forward only to 'Server
A' as long as the session is the same.

Likewise:

Request 1 comes from 'Client B': Squid forwards the request to 'Server B'.
Request 2 comes from 'Client B': Squid forwards the request to 'Server B'.

If 'Server A' fails, Squid should forward all the requests to 'Server B'.

//Remy





RE: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread Mario Remy Almeida
That is what I am saying.

Since you say:

"If Primary server fails, the DNS will resolve secondary IP for proxyA"

//Remy


On Mon, 2009-06-15 at 14:39 +0530, Sagar Navalkar wrote:
 Hey Remy,
 
 The DNS server does not determine which server is down, however If It is
 unable to resolve the 1st entry, it will automatically go down to the 2nd
 entry.
 
 Regards,
 
 Sagar Navalkar
 Team Leader
 
 
 -Original Message-
 From: Mario Remy Almeida [mailto:malme...@isaaviation.ae] 
 Sent: Monday, June 15, 2009 1:36 PM
 To: Sagar Navalkar
 Cc: squid-users@squid-cache.org; 'abdul sami'
 Subject: RE: [squid-users] How to setup squid proxy to run in fail-over mode
 
 Hi Sagar,
 
 Just a Question?
 
 How can a DNS server determine that the primary server is down and it
 should resolve the secondary server IP?
 
 //Remy
 
 On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
  Hi Abdul,
  
  Please try to enter 2 different IPs in the DNS  
  
  10.xxx.yyy.zz1 (proxyA) as primary (proxyA-Name should be same on both the
  servers.)
  10.xxx.yyy.zz2 (proxyA) as secondary.
  
  Start squid services on both the servers (Primary  Secondary)
  
  If Primary server fails, the DNS will resolve secondary IP for proxyA 
 the
  squid on second server will kick in automatically..
  
  Hope am able to explain it properly.
  
  Regards,
  
  Sagar Navalkar
  
  
  -Original Message-
  From: abdul sami [mailto:sami.me...@gmail.com] 
  Sent: Monday, June 15, 2009 11:17 AM
  To: squid-users@squid-cache.org
  Subject: [squid-users] How to setup squid proxy to run in fail-over mode
  
  Dear all,
  
  Now that i have setup a proxy server, as a next step i want to run it
  in fail-over high availability mode, so that if one proxy is down due
  to any reason, second proxy should automatically be up and start
  serving requests.
  
  any help in shape of articles/steps would be highly appreciated.
  
  Thanks and regards,
  
  A Sami
  
 
 
 
 
 




RE: [squid-users] certain pages loading correctly in Firefox but not IE

2009-06-15 Thread Stand H



--- On Sun, 6/14/09, Amos Jeffries squ...@treenet.co.nz wrote:

 From: Amos Jeffries squ...@treenet.co.nz
 Subject: RE: [squid-users] certain pages loading correctly in Firefox but not 
 IE
 To: Timothy Larrea webmas...@wccs.nsw.edu.au
 Cc: squid-users@squid-cache.org
 Date: Sunday, June 14, 2009, 4:14 PM
 On Mon, 15 Jun 2009 08:39:53 +1000, Timothy Larrea webmas...@wccs.nsw.edu.au wrote:
  That's what I thought initially; however, the pages load fine in both
  browsers when the proxy server is bypassed completely.
 
 You are going to have to compare the headers sent by Firefox and IE. There
 are HTTP/1.1 things that IE does that Squid cannot cope with in older
 versions. Then there are things that older Squid, like 2.6, do that they
 shouldn't. The only way to know is to look deeper than "it doesn't work".
 
 Amos
 
 
  
  -Original Message-
  From: Amos Jeffries [mailto:squ...@treenet.co.nz]
  Sent: Friday, 12 June 2009 8:32 PM
  To: Timothy Larrea
  Cc: squid-users@squid-cache.org
  Subject: Re: [squid-users] certain pages loading correctly in Firefox but not IE
  
  Timothy Larrea wrote:
  Hi All,
  
  We currently have a squid proxy (2.6.stable5) running, and it seems that
  certain pages, such as our Google Docs site and YouTube, don't load
  correctly when using IE as the browser, but Firefox is fine. In IE, the
  page loads all the text, but it seems to be missing the CSS data and
  javascripts, so the text is large and all over the place. Another odd
  thing is that if you attempt to load a page, close IE, then reopen it
  and load that page again, it works the 2nd time around. I've tested this
  on a clean XP install with IE6 and IE7, Vista, Windows 7, etc.
  
  Any suggestions would be appreciated.
  
  IE has trouble loading things sometimes. That other browsers can get it
  shows it's unlikely to be a Squid issue.
  
  Look at the headers being sent by each browser in their requests for the
  CSS and compare.
  
  Amos
 

Hi,

I have also seen this weird behavior with IE when I try to cache dynamic 
content. All others are OK.

Regards,
Stand  





Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread Luis Daniel Lucio Quiroz
There are 2 ways, as far as I know, to make this possible:

1. Use the WPAD protocol: let's say PROXY squid1; PROXY squid2 (this is 
fail-over).
2. Use an HA solution such as Ultramonkey3. Here you could do Active-Active.

Kind regards,

LD
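Option 1 can be sketched as a PAC file served via WPAD; the hostnames and port below are assumptions, not values from this thread:

```javascript
// Hypothetical PAC file for browser-side fail-over (WPAD).
// The browser tries squid1 first and falls back to squid2 if it
// cannot connect; DIRECT is a last resort and may be omitted.
function FindProxyForURL(url, host) {
  return "PROXY squid1.example.com:3128; " +
         "PROXY squid2.example.com:3128; DIRECT";
}
```

Note that this fail-over happens in each browser, so a dead primary adds a connect-timeout delay per client rather than being handled centrally.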
On Monday 15 June 2009 11:09:28, Sagar Navalkar wrote:
 Hey Remy,

 The DNS server does not determine which server is down, however If It is
 unable to resolve the 1st entry, it will automatically go down to the 2nd
 entry.

 Regards,

 Sagar Navalkar
 Team Leader


 -Original Message-
 From: Mario Remy Almeida [mailto:malme...@isaaviation.ae]
 Sent: Monday, June 15, 2009 1:36 PM
 To: Sagar Navalkar
 Cc: squid-users@squid-cache.org; 'abdul sami'
 Subject: RE: [squid-users] How to setup squid proxy to run in fail-over
 mode

 Hi Sagar,

 Just a Question?

 How can a DNS server determine that the primary server is down and it
 should resolve the secondary server IP?

 //Remy

 On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
  Hi Abdul,
 
  Please try to enter 2 different IPs in the DNS 
 
  10.xxx.yyy.zz1 (proxyA) as primary (proxyA-Name should be same on both
  the servers.)
  10.xxx.yyy.zz2 (proxyA) as secondary.
 
  Start squid services on both the servers (Primary  Secondary)
 
  If Primary server fails, the DNS will resolve secondary IP for proxyA 

 the

  squid on second server will kick in automatically..
 
  Hope am able to explain it properly.
 
  Regards,
 
  Sagar Navalkar
 
 
  -Original Message-
  From: abdul sami [mailto:sami.me...@gmail.com]
  Sent: Monday, June 15, 2009 11:17 AM
  To: squid-users@squid-cache.org
  Subject: [squid-users] How to setup squid proxy to run in fail-over mode
 
  Dear all,
 
  Now that i have setup a proxy server, as a next step i want to run it
  in fail-over high availability mode, so that if one proxy is down due
  to any reason, second proxy should automatically be up and start
  serving requests.
 
  any help in shape of articles/steps would be highly appreciated.
 
  Thanks and regards,
 
  A Sami



Re: [squid-users] Load Balancing Query

2009-06-15 Thread Amos Jeffries

Mario Remy Almeida wrote:

Hi All,

Want to know if load balancing is possible with squid by maintaining
sessions.
Health check should be TCP Ports

eg:
Server A - Active port 8080
Server B - Active port 8080

Client - Squid - Server A and/or B

Request 1 comes from 'Client A' Squid forwards the request to 'Server A'
Request 2 comes from 'Client A' Squid forwards the request to 'Server A'
and so on
any further request from 'Client A' squid should only forward to 'Server
A' until the session is same

if

Request 1 comes from 'Client B' Squid forwards the request to 'Server B'
Request 2 comes from 'Client B' Squid forwards the request to 'Server B'

if 'Server A' fails Squid should forward all the request to 'Server B'

//Remy




HTTP is stateless. It contains no such thing as sessions. That is a 
browser feature.


What you are looking for is something like the CARP or sourcehash peering 
algorithms. They keep all requests for certain URLs going to the same 
place (CARP), or all requests from the same client IP going to the same 
place (sourcehash).


see
http://www.squid-cache.org/Doc/config/cache_peer
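A sourcehash setup along those lines might look like this in squid.conf (the server names are assumptions; a CARP setup would use the carp option instead):

```
# Hypothetical parents balanced by client source IP hash, so each
# client sticks to one parent. Squid marks a peer dead after repeated
# connection failures and shifts its traffic to the surviving peer.
cache_peer serverA.example.com parent 8080 0 no-query sourcehash
cache_peer serverB.example.com parent 8080 0 no-query sourcehash
never_direct allow all
```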


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread Muhammad Sharfuddin
Just a question:

2. Use an HA solution such as Ultramonkey3. Here you could do
Active-Active.

Why Ultramonkey3, and why not the HA suite from http://www.linux-ha.org/?

-Sharfuddin

A PC is like a aircondition. If you open Windows it just don't funktion
properly anymore

On Mon, 2009-06-15 at 12:12 +0200, Luis Daniel Lucio Quiroz wrote:
 There are 2 ways as far as I know to do this possible:
 
 1. Use de WPAD protocol: lets say PROXY squid1; PROXY squid2 (this is fail 
 over)
 2. Use an HA solution such as Ultramonkey3. Here you could do Active-Active.
 
 Kind regards,
 
 LD
 On Monday 15 June 2009 11:09:28, Sagar Navalkar wrote:
  Hey Remy,
 
  The DNS server does not determine which server is down, however If It is
  unable to resolve the 1st entry, it will automatically go down to the 2nd
  entry.
 
  Regards,
 
  Sagar Navalkar
  Team Leader
 
 
  -Original Message-
  From: Mario Remy Almeida [mailto:malme...@isaaviation.ae]
  Sent: Monday, June 15, 2009 1:36 PM
  To: Sagar Navalkar
  Cc: squid-users@squid-cache.org; 'abdul sami'
  Subject: RE: [squid-users] How to setup squid proxy to run in fail-over
  mode
 
  Hi Sagar,
 
  Just a Question?
 
  How can a DNS server determine that the primary server is down and it
  should resolve the secondary server IP?
 
  //Remy
 
  On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
   Hi Abdul,
  
   Please try to enter 2 different IPs in the DNS 
  
   10.xxx.yyy.zz1 (proxyA) as primary (proxyA-Name should be same on both
   the servers.)
   10.xxx.yyy.zz2 (proxyA) as secondary.
  
   Start squid services on both the servers (Primary  Secondary)
  
   If Primary server fails, the DNS will resolve secondary IP for proxyA 
 
  the
 
   squid on second server will kick in automatically..
  
   Hope am able to explain it properly.
  
   Regards,
  
   Sagar Navalkar
  
  
   -Original Message-
   From: abdul sami [mailto:sami.me...@gmail.com]
   Sent: Monday, June 15, 2009 11:17 AM
   To: squid-users@squid-cache.org
   Subject: [squid-users] How to setup squid proxy to run in fail-over mode
  
   Dear all,
  
   Now that i have setup a proxy server, as a next step i want to run it
   in fail-over high availability mode, so that if one proxy is down due
   to any reason, second proxy should automatically be up and start
   serving requests.
  
   any help in shape of articles/steps would be highly appreciated.
  
   Thanks and regards,
  
   A Sami
 
 



Re: [squid-users] Load Balancing Query

2009-06-15 Thread Mario Remy Almeida
Hi Amos,

Thanks for that.

So I need to use CARP or sourcehash to do the load balancing, right?

But where do I specify in Squid that it should monitor the ports?

I mean, if port 8080 is down on 'ServerA', how will Squid know that it
should send the request to 'ServerB' on port 8080?

//Remy

On Mon, 2009-06-15 at 23:05 +1200, Amos Jeffries wrote:
 Mario Remy Almeida wrote:
  Hi All,
  
  Want to know if load balancing is possible with squid by maintaining
  sessions.
  Health check should be TCP Ports
  
  eg:
  Server A - Active port 8080
  Server B - Active port 8080
  
  Client - Squid - Server A and/or B
  
  Request 1 comes from 'Client A' Squid forwards the request to 'Server A'
  Request 2 comes from 'Client A' Squid forwards the request to 'Server A'
  and so on
  any further request from 'Client A' squid should only forward to 'Server
  A' until the session is same
  
  if
  
  Request 1 comes from 'Client B' Squid forwards the request to 'Server B'
  Request 2 comes from 'Client B' Squid forwards the request to 'Server B'
  
  if 'Server A' fails Squid should forward all the request to 'Server B'
  
  //Remy
  
 
 
 HTTP is stateless. It contains no such thing as sessions. That is a 
 browser feature.
 
 What you are looking for is something like CARP or sourcehash peering 
 algorithms. They keep all requests for certain URLs sent to the same 
 place (CARP) or all requests for the same IP to the same place (sourcehash).
 
 see
 http://www.squid-cache.org/Doc/config/cache_peer
 
 
 Amos

 




[squid-users] NONE/411 Length Required

2009-06-15 Thread Bijayant Kumar

Hello list,

I have Squid version 3.0.STABLE10 installed on a Gentoo Linux box. Everything 
is working fine (caching, proxying, etc.), but there is a problem with some 
sites. When I am accessing one of those sites, in access.log I am getting

NONE/411 3692 POST 
http://.justdial.com/autosuggest_category_query_main.php? - NONE/- text/html

And on the webpage I am getting Squid's whole error page. It is actually a 
search-related page: in the search criteria field, as soon as I have typed 
two words I get this error. The website in question is 
"http://justdial.com", but it works without Squid.


I also tried to capture the HTTP headers, which are as below:

http://.justdial.com/autosuggest_category_query_main.php?city=Bangalore&search=Ka



POST /autosuggest_category_query_main.php?city=Bangalore&search=Ka HTTP/1.1

Host: .justdial.com

User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.16) Gecko/20080807 
Firefox/2.0.0.16

Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

Accept-Language: en-us,en;q=0.7,hi;q=0.3

Accept-Encoding: gzip,deflate

Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7

Keep-Alive: 300

Connection: keep-alive

Referer: http://.justdial.com/

Cookie: PHPSESSID=d1d12004187d4bf1f084a1252ec46cef; 
__utma=79653650.2087995718.1245064656.1245064656.1245064656.1; __utmb=79653650; 
__utmc=79653650; 
__utmz=79653650.1245064656.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); 
CITY=Bangalore

Pragma: no-cache

Cache-Control: no-cache



HTTP/1.x 411 Length Required

Server: squid/3.0.STABLE10

Mime-Version: 1.0

Date: Mon, 15 Jun 2009 11:18:10 GMT

Content-Type: text/html

Content-Length: 3287

Expires: Mon, 15 Jun 2009 11:18:10 GMT

X-Squid-Error: ERR_INVALID_REQ 0

X-Cache: MISS from bijayant.kavach.blr

X-Cache-Lookup: NONE from bijayant.kavach.blr:3128

Via: 1.0 bijayant.kavach.blr (squid/3.0.STABLE10)

Proxy-Connection: close
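(Note: the captured POST above carries no Content-Length header, which is the usual trigger for Squid's 411/ERR_INVALID_REQ response: a POST without a declared body length is rejected as invalid. A well-formed POST would include one; a sketch with illustrative values:)

```
POST /autosuggest_category_query_main.php HTTP/1.1
Host: justdial.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 0
```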

Please suggest me what could be the reason and how to resolve this. Any 
help/pointer can be a very helpful for me. 


Bijayant Kumar




Re: [squid-users] NONE/411 Length Required

2009-06-15 Thread Muhammad Sharfuddin
# squid -v
Squid Cache: Version 2.5.STABLE12
OS: SUSE Linux Enterprise 10 SP 2

I just tested it and got the same errors you posted.

-Sharfuddin

A PC is like a aircondition. If you open Windows it just don't funktion
properly anymore

On Mon, 2009-06-15 at 04:35 -0700, Bijayant Kumar wrote:
 Hello list,
 
 I have Squid version 3.0.STABLE 10 installed on Gentoo linux box. All things 
 are working fine, means caching proxying etc. There is a problem with some 
 sites. When I am accessing one of those sites, in access.log I am getting
 
 NONE/411 3692 POST 
 http://.justdial.com/autosuggest_category_query_main.php? - NONE/- 
 text/html
 
 And on the webpage I am getting whole error page of squid. Actually its a 
 search related page. In the search criteria field as soon as I am typing 
 after two words I am getting this error. The website in a question is 
 http://justdial.com;. But it works without the Squid.
 
 
 I tried to capture the http headers also which are as below
 
 http://.justdial.com/autosuggest_category_query_main.php?city=Bangaloresearch=Ka
 
 
 
 POST /autosuggest_category_query_main.php?city=Bangaloresearch=Ka HTTP/1.1
 
 Host: .justdial.com
 
 User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.16) 
 Gecko/20080807 Firefox/2.0.0.16
 
 Accept: 
 text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
 
 Accept-Language: en-us,en;q=0.7,hi;q=0.3
 
 Accept-Encoding: gzip,deflate
 
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
 
 Keep-Alive: 300
 
 Connection: keep-alive
 
 Referer: http://.justdial.com/
 
 Cookie: PHPSESSID=d1d12004187d4bf1f084a1252ec46cef; 
 __utma=79653650.2087995718.1245064656.1245064656.1245064656.1; 
 __utmb=79653650; __utmc=79653650; 
 __utmz=79653650.1245064656.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); 
 CITY=Bangalore
 
 Pragma: no-cache
 
 Cache-Control: no-cache
 
 
 
 HTTP/1.x 411 Length Required
 
 Server: squid/3.0.STABLE10
 
 Mime-Version: 1.0
 
 Date: Mon, 15 Jun 2009 11:18:10 GMT
 
 Content-Type: text/html
 
 Content-Length: 3287
 
 Expires: Mon, 15 Jun 2009 11:18:10 GMT
 
 X-Squid-Error: ERR_INVALID_REQ 0
 
 X-Cache: MISS from bijayant.kavach.blr
 
 X-Cache-Lookup: NONE from bijayant.kavach.blr:3128
 
 Via: 1.0 bijayant.kavach.blr (squid/3.0.STABLE10)
 
 Proxy-Connection: close
 
 Please suggest me what could be the reason and how to resolve this. Any 
 help/pointer can be a very helpful for me. 
 
 
 Bijayant Kumar
 
 
 



Re: [squid-users] Tuning problem in squid

2009-06-15 Thread Thanigairajan
Hi,
I have done everything that was suggested by Kinkie,
and the problem is a little bit rectified;
i.e. it is comparatively better, but if clients are using new sites (other
than those already in the cache) it is still slow.

My squid -v is as follows

innovat...@innovation:~$ squid -v
Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid'
'--enable-async-io' '--with-pthreads'
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter'
'--enable-arp-acl' '--enable-epoll'
'--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests'
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm' '--enable-carp'
'--enable-follow-x-forwarded-for' '--with-large-files'
'--with-maxfd=65536' 'i386-debian-linux'
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux'
'target_alias=i386-debian-linux' 'CFLAGS=-Wall -g -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='



Here I am pasting my squid.conf file (excerpt):

http_port 127.0.0.1:3128
http_port 192.168.1.6:3128 transparent
cache_effective_user proxy
cache_effective_group proxy
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
 cache_mem 48 MB
maximum_object_size 8192 KB
fqdncache_size 2048
 cache_dir ufs /var/spool/squid 1000 16 256
access_log /var/log/squid/access.log squid
debug_options ALL,1
 log_fqdn off
hosts_file /etc/hosts
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl Safe_ports port 465
acl Safe_ports port 143
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl our_networks src 192.168.1.0/24
acl ceo src 192.168.1.8
acl ceo src 192.168.1.35
acl normal_users src 192.168.1.159 192.168.1.160 192.168.1.161 192.168.1.162
acl filetype url_regex -i .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar
.avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov .mp4 .msi
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 100/6000
delay_access 1 allow  filetype normal_users
http_access  allow  our_networks
http_access allow ceo
http_access allow normal_users
http_access deny !normal_users
http_access deny normal_users bannedsites
http_access allow localhost
http_access allow ceo
http_access allow our_networks
http_access deny all
http_reply_access allow all
 cache_effective_user proxy
cache_effective_group proxy
visible_hostname innovation
cache_mgr   tech_supp...@sybrant.com
coredump_dir /var/spool/squid
redirector_bypass on
redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
redirect_children 10
 pipeline_prefetch on


On Fri, Jun 12, 2009 at 8:00 PM, Kinkie gkin...@gmail.com wrote:

 On Fri, Jun 12, 2009 at 4:01 PM, Thanigairajanmethani...@gmail.com wrote:
  Hi ,
 
  I am facing some performance issues in squid .
 
  i.e. I have Debian etch with squid,squidguard,shorewall.
  Internet is working at normal speed if clients are approx 50.
  If clients are approx 70-100 it is getting very slow.
 
  I googled for tuning and did the following things,
  redirect_children 10
  cache_dir ufs /var/spool/squid 1000 16 256

 ufs is definitely not suited for anything but testing. Please try aufs 
 instead.

  cache_mem 48 MB

 48Mb of cache_mem on a 4gb server? This could definitely be raised.

  pipeline_prefetch on
  fqdncache_size 2048
  maximum_object_size 8192 KB
 
  Can you please suggest me how can i improve  much ?
 
  FYI: We have a leased line, so we are getting constant bandwidth.
  We are running the server in desktop HP Compaq with 4GB RAM, Core2Duo

 Unless your issues can be solved by these simple hints, we need to
 have more informations, such as the output from squid -v and a more
 complete configuration excerpt.

 --
    /kinkie



--
Thanks & Regards
MThanigairajan

The Most Certain Way To Succeed Is To Try One More Time

         -- By Edison


Re: [squid-users] Load Balancing Query

2009-06-15 Thread Amos Jeffries

Mario Remy Almeida wrote:

Hi Amos,

Thanks for that,

so I need to use carp and sourcehash to do load balancing, right?


only the one you want.



but where do I specify in squid to monitor the ports?

I mean if port 8080 is down on 'ServerA' how Squid will know that it
should send the request to 'ServerB' on port 8080?


It's automatic in the background.

The latest 2.HEAD and 3.1 have options to configure how long it takes to 
detect. Other squids attempt ~10 connects and then fail over.


Amos



//Remy

On Mon, 2009-06-15 at 23:05 +1200, Amos Jeffries wrote:

Mario Remy Almeida wrote:

Hi All,

Want to know if load balancing is possible with squid by maintaining
sessions.
Health check should be TCP Ports

eg:
Server A - Active port 8080
Server B - Active port 8080

Client - Squid - Server A and/or B

Request 1 comes from 'Client A' Squid forwards the request to 'Server A'
Request 2 comes from 'Client A' Squid forwards the request to 'Server A'
and so on
any further request from 'Client A' squid should only forward to 'Server
A' until the session is same

if

Request 1 comes from 'Client B' Squid forwards the request to 'Server B'
Request 2 comes from 'Client B' Squid forwards the request to 'Server B'

if 'Server A' fails Squid should forward all the request to 'Server B'

//Remy



HTTP is stateless. It contains no such thing as sessions. That is a 
browser feature.


What you are looking for is something like CARP or sourcehash peering 
algorithms. They keep all requests for certain URLs sent to the same 
place (CARP) or all requests for the same IP to the same place (sourcehash).


see
http://www.squid-cache.org/Doc/config/cache_peer
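A minimal sketch of such a peering setup, assuming two hypothetical parents serverA.example.com and serverB.example.com on port 8080:

```
# sourcehash: all requests from the same client IP go to the same parent;
# if a parent stops answering, Squid fails over to the remaining one
cache_peer serverA.example.com parent 8080 0 no-query sourcehash
cache_peer serverB.example.com parent 8080 0 no-query sourcehash
never_direct allow all
```

(Swap sourcehash for carp to hash on the URL instead of the client IP.)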


Amos





--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


Re: [squid-users] NONE/411 Length Required

2009-06-15 Thread Amos Jeffries

Bijayant Kumar wrote:

Hello list,

I have Squid version 3.0.STABLE 10 installed on Gentoo linux box. All things 
are working fine, means caching proxying etc. There is a problem with some 
sites. When I am accessing one of those sites, in access.log I am getting

NONE/411 3692 POST 
http://.justdial.com/autosuggest_category_query_main.php? - NONE/- text/html

And on the webpage I am getting whole error page of squid. Actually its a search related 
page. In the search criteria field as soon as I am typing after two words I am getting 
this error. The website in question is "http://justdial.com". But it works 
without the Squid.


I tried to capture the http headers also which are as below

http://.justdial.com/autosuggest_category_query_main.php?city=Bangalore&search=Ka



POST /autosuggest_category_query_main.php?city=Bangalore&search=Ka HTTP/1.1

Host: .justdial.com

User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.16) Gecko/20080807 
Firefox/2.0.0.16

Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

Accept-Language: en-us,en;q=0.7,hi;q=0.3

Accept-Encoding: gzip,deflate

Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7

Keep-Alive: 300

Connection: keep-alive

Referer: http://.justdial.com/

Cookie: PHPSESSID=d1d12004187d4bf1f084a1252ec46cef; 
__utma=79653650.2087995718.1245064656.1245064656.1245064656.1; __utmb=79653650; 
__utmc=79653650; 
__utmz=79653650.1245064656.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); 
CITY=Bangalore

Pragma: no-cache

Cache-Control: no-cache



HTTP/1.x 411 Length Required

Server: squid/3.0.STABLE10

Mime-Version: 1.0

Date: Mon, 15 Jun 2009 11:18:10 GMT

Content-Type: text/html

Content-Length: 3287

Expires: Mon, 15 Jun 2009 11:18:10 GMT

X-Squid-Error: ERR_INVALID_REQ 0

X-Cache: MISS from bijayant.kavach.blr

X-Cache-Lookup: NONE from bijayant.kavach.blr:3128

Via: 1.0 bijayant.kavach.blr (squid/3.0.STABLE10)

Proxy-Connection: close

Please suggest me what could be the reason and how to resolve this. Any help/pointer can be a very helpful for me. 



Bijayant Kumar


  Get your new Email address!
Grab the Email name you've always wanted before someone else does!
http://mail.promotions.yahoo.com/newdomains/aa/



NONE - no upstream source.
411  - Content-Length missing

HTTP requires a Content-Length: header on POST requests.
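For illustration, a well-formed POST carries the body length explicitly (host and values here are placeholders, not the actual justdial request):

```
POST /autosuggest_category_query_main.php HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 24

city=Bangalore&search=Ka
```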

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread abdul sami
Thanks to all for the replies.

Sorry, I didn't mention the platform I am using to run squid on,
which is FreeBSD 7.

I have visited the linux-ha site, where it says the software is
supported for FreeBSD too, but there is no distribution for FreeBSD, so
can you tell me which distribution I can use for FreeBSD 7?

Thanks & Regards,
A Sami

On Mon, Jun 15, 2009 at 4:07 PM, Muhammad
Sharfuddinm.sharfud...@nds.com.pk wrote:
 just a question

2. Use an HA solution such as Ultramonkey3. Here you could do
Active-Active.
 Why Ultramonkey3.. why not HA from http://www.linux-ha.org/

 -Sharfuddin

 A PC is like a aircondition. If you open Windows it just don't funktion
 properly anymore

 On Mon, 2009-06-15 at 12:12 +0200, Luis Daniel Lucio Quiroz wrote:
 There are 2 ways, as far as I know, to make this possible:

 1. Use the WPAD protocol: let's say PROXY squid1; PROXY squid2 (this is fail
 over)
 2. Use an HA solution such as Ultramonkey3. Here you could do Active-Active.

 Kind regards,

 LD
 Le lundi 15 juin 2009 11:09:28, Sagar Navalkar a écrit :
  Hey Remy,
 
  The DNS server does not determine which server is down; however, if it is
  unable to resolve the 1st entry, it will automatically go down to the 2nd
  entry.
 
  Regards,
 
  Sagar Navalkar
  Team Leader
 
 
  -Original Message-
  From: Mario Remy Almeida [mailto:malme...@isaaviation.ae]
  Sent: Monday, June 15, 2009 1:36 PM
  To: Sagar Navalkar
  Cc: squid-users@squid-cache.org; 'abdul sami'
  Subject: RE: [squid-users] How to setup squid proxy to run in fail-over
  mode
 
  Hi Sagar,
 
  Just a Question?
 
  How can a DNS server determine that the primary server is down and it
  should resolve the secondary server IP?
 
  //Remy
 
  On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
   Hi Abdul,
  
   Please try to enter 2 different IPs in the DNS:
  
   10.xxx.yyy.zz1 (proxyA) as primary (proxyA-Name should be same on both
   the servers.)
   10.xxx.yyy.zz2 (proxyA) as secondary.
  
   Start squid services on both the servers (Primary  Secondary)
  
   If the primary server fails, the DNS will resolve the secondary IP for proxyA and
   the squid on the second server will kick in automatically..
  
   Hope I am able to explain it properly.
  
   Regards,
  
   Sagar Navalkar
  
  
   -Original Message-
   From: abdul sami [mailto:sami.me...@gmail.com]
   Sent: Monday, June 15, 2009 11:17 AM
   To: squid-users@squid-cache.org
   Subject: [squid-users] How to setup squid proxy to run in fail-over mode
  
   Dear all,
  
   Now that i have setup a proxy server, as a next step i want to run it
   in fail-over high availability mode, so that if one proxy is down due
   to any reason, second proxy should automatically be up and start
   serving requests.
  
   any help in shape of articles/steps would be highly appreciated.
  
   Thanks and regards,
  
   A Sami
 
  -----
  Disclaimer and Confidentiality
 
 
  This material has been checked for  computer viruses and although none has
  been found, we cannot guarantee  that it is completely free from such
  problems
  and do not accept any  liability for loss or damage which may be caused.
  Please therefore  check any attachments for viruses before using them on
  your
  own  equipment. If you do find a computer virus please inform us
  immediately so that we may take appropriate action. This communication is
  intended solely
  for the addressee and is confidential. If you are not the intended
  recipient,
  any disclosure, copying, distribution or any action  taken or omitted to be
  taken in reliance on it, is prohibited and may be  unlawful. The views
  expressed in this message are those of the  individual sender, and may not
  necessarily be that of ISA.





[squid-users] Https redirect?

2009-06-15 Thread Chris Williams

Hi,
I'm running Squid Beta 3.1 in forward proxy mode. I'd like to redirect  
certain domains to my own hosted page using an external redirector  
script.


This works fine for http traffic, but with https my browser tells me
that the proxy server has refused my connection.


Is there any way for me to get my own page displayed here?

Thanks,
Chris


Re: [squid-users] NONE/411 Length Required

2009-06-15 Thread Bijayant Kumar


--- On Mon, 15/6/09, Amos Jeffries squ...@treenet.co.nz wrote:

 From: Amos Jeffries squ...@treenet.co.nz
 Subject: Re: [squid-users] NONE/411 Length Required
 To: Bijayant Kumar bijayan...@yahoo.com
 Cc: squid users squid-users@squid-cache.org
 Date: Monday, 15 June, 2009, 6:06 PM
 Bijayant Kumar wrote:
  Hello list,
  
  I have Squid version 3.0.STABLE 10 installed on Gentoo
 linux box. All things are working fine, means caching
 proxying etc. There is a problem with some sites. When I am
 accessing one of those sites, in access.log I am getting
  
  NONE/411 3692 POST 
  http://.justdial.com/autosuggest_category_query_main.php?
 - NONE/- text/html
  
  And on the webpage I am getting whole error page of
 squid. Actually its a search related page. In the search
 criteria field as soon as I am typing after two words I am
 getting this error. The website in question is "http://justdial.com". But
 it works without the Squid.
  
  
  I tried to capture the http headers also which are as
 below
  
  http://.justdial.com/autosuggest_category_query_main.php?city=Bangalore&search=Ka
  
  
  
  POST
 /autosuggest_category_query_main.php?city=Bangalore&search=Ka
 HTTP/1.1
  
  Host: .justdial.com
  
  User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
 rv:1.8.1.16) Gecko/20080807 Firefox/2.0.0.16
  
  Accept:
 text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
  
  Accept-Language: en-us,en;q=0.7,hi;q=0.3
  
  Accept-Encoding: gzip,deflate
  
  Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
  
  Keep-Alive: 300
  
  Connection: keep-alive
  
  Referer: http://.justdial.com/
  
  Cookie: PHPSESSID=d1d12004187d4bf1f084a1252ec46cef;
 __utma=79653650.2087995718.1245064656.1245064656.1245064656.1;
 __utmb=79653650; __utmc=79653650;
 __utmz=79653650.1245064656.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none);
 CITY=Bangalore
  
  Pragma: no-cache
  
  Cache-Control: no-cache
  
  
  
  HTTP/1.x 411 Length Required
  
  Server: squid/3.0.STABLE10
  
  Mime-Version: 1.0
  
  Date: Mon, 15 Jun 2009 11:18:10 GMT
  
  Content-Type: text/html
  
  Content-Length: 3287
  
  Expires: Mon, 15 Jun 2009 11:18:10 GMT
  
  X-Squid-Error: ERR_INVALID_REQ 0
  
  X-Cache: MISS from bijayant.kavach.blr
  
  X-Cache-Lookup: NONE from bijayant.kavach.blr:3128
  
  Via: 1.0 bijayant.kavach.blr (squid/3.0.STABLE10)
  
  Proxy-Connection: close
  
  Please suggest me what could be the reason and how to
 resolve this. Any help/pointer can be a very helpful for me.
 
  
  Bijayant Kumar
  
  
Get your new Email
 address!
  Grab the Email name you've always wanted before
 someone else does!
  http://mail.promotions.yahoo.com/newdomains/aa/
 
 
 NONE - no upstream source.
 411  - Content-Length missing
 
 HTTP requires a Content-Length: header on POST requests.
 

How to resolve this issue. Because the website is on internet and its working 
fine without the squid. When I am bypassing the proxy, I am not getting any 
type of error.

Can't this website be accessed through the Squid?

 Amos
 -- Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
   Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1
 


  New Email addresses available on Yahoo!
Get the Email name you've always wanted on the new @ymail and @rocketmail. 
Hurry before someone else does!
http://mail.promotions.yahoo.com/newdomains/aa/


Re: [squid-users] Tuning problem in squid

2009-06-15 Thread Amos Jeffries

Thanigairajan wrote:

Hi,
I have done everything suggested by Kinkie,
and the problem is a little bit rectified,
i.e. it is comparatively good, but if clients are visiting new sites (not
already in the cache) it is slow.

My squid -v is as follows

innovat...@innovation:~$ squid -v
Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid'
'--enable-async-io' '--with-pthreads'
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter'
'--enable-arp-acl' '--enable-epoll'
'--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests'
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm' '--enable-carp'
'--enable-follow-x-forwarded-for' '--with-large-files'
'--with-maxfd=65536' 'i386-debian-linux'
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux'
'target_alias=i386-debian-linux' 'CFLAGS=-Wall -g -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='




Your Squid is kind of aging. We are now up to 2.7.STABLE6 or 3.0.STABLE16.
Even the Debian stable release is up to 2.7.STABLE3 or 3.0.STABLE8 already.





Here I am pasting my squid.conf file (excerpt)

http_port 127.0.0.1:3128
http_port 192.168.1.6:3128 transparent
cache_effective_user proxy
cache_effective_group proxy
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
 cache_mem 48 MB


#1: low amount of memory available for recently hit of often-hit objects.


maximum_object_size 8192 KB
fqdncache_size 2048
 cache_dir ufs /var/spool/squid 1000 16 256


#2: ufs filesystem. You appear to have Linux therefore use AUFS.

#3:  1000 MB allocated for entire cache storage. Increase this to raise 
local hits and thus speed.



access_log /var/log/squid/access.log squid
debug_options ALL,1
 log_fqdn off
hosts_file /etc/hosts
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl Safe_ports port 465
acl Safe_ports port 143
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl our_networks src 192.168.1.0/24
acl ceo src 192.168.1.8
acl ceo src 192.168.1.35
acl normal_users src 192.168.1.159 192.168.1.160 192.168.1.161 192.168.1.162
acl filetype url_regex -i .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar
.avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov .mp4 .msi
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 100/6000
delay_access 1 allow  filetype normal_users


Hmm, as slow as 100 bytes per second perhaps?

That filetype regex will catch most requests.

* Remember that for regex '.' means any character

* Listing a pattern without anchors means it matches anywhere.

* The Squid url_regex pattern matches the entire URL: 
protocol,domain,port,path,query-string all of it.




http_access  allow  our_networks


NP: Entire network allowed to access the net, before special ranges...

These ...

http_access allow ceo
http_access allow normal_users
http_access deny !normal_users
http_access deny normal_users bannedsites
http_access allow localhost
http_access allow ceo
http_access allow our_networks


... to here will never match.


http_access deny all
http_reply_access allow all
 cache_effective_user proxy
cache_effective_group proxy
visible_hostname innovation
cache_mgr   tech_supp...@sybrant.com
coredump_dir /var/spool/squid
redirector_bypass on
redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
redirect_children 10
 pipeline_prefetch on



Besides the cache_dir and delay_pools issues, I think it's likely to be a 
squidGuard issue. Simply calling and waiting for a redirector can slow 
things down noticeably under load.


I'd also check the squidGuard rules are handled fast.

squidclient mgr:redirector (look for the avg service times)




On Fri, Jun 12, 2009 at 8:00 PM, Kinkie gkin...@gmail.com wrote:

On Fri, Jun 12, 2009 at 4:01 PM, Thanigairajanmethani...@gmail.com wrote:

Hi ,

I am facing some performance issues in squid .

i.e. I have Debian etch with squid,squidguard,shorewall.
Internet is working at normal speed if clients are approx 50.
If clients are approx 70-100 it is getting very slow.

I googled for tuning and did the following things,
redirect_children 10
cache_dir ufs /var/spool/squid 1000 16 256

ufs is definitely not suited for anything but testing. Please try aufs instead.


cache_mem 48 MB

48Mb of 

Re: [squid-users] Load Balancing Query

2009-06-15 Thread Mario Remy Almeida
Thanks Amos for the help



On Tue, 2009-06-16 at 00:30 +1200, Amos Jeffries wrote:
 Mario Remy Almeida wrote:
  Hi Amos,
  
  Thanks for that,
  
  so I need to use carp and sourcehash to do load balancing, right?
 
 only the one you want.
 
  
  but where do I specify in squid to monitor the ports?
  
  I mean if port 8080 is down on 'ServerA' how Squid will know that it
  should send the request to 'ServerB' on port 8080?
 
 It's automatic in the background.
 
 The latest 2.HEAD and 3.1 have options to configure how long it takes to 
  detect. Other squids attempt ~10 connects and then fail over.
 
 Amos
 
  
  //Remy
  
  On Mon, 2009-06-15 at 23:05 +1200, Amos Jeffries wrote:
  Mario Remy Almeida wrote:
  Hi All,
 
  Want to know if load balancing is possible with squid by maintaining
  sessions.
  Health check should be TCP Ports
 
  eg:
  Server A - Active port 8080
  Server B - Active port 8080
 
  Client - Squid - Server A and/or B
 
  Request 1 comes from 'Client A' Squid forwards the request to 'Server A'
  Request 2 comes from 'Client A' Squid forwards the request to 'Server A'
  and so on
  any further request from 'Client A' squid should only forward to 'Server
  A' until the session is same
 
  if
 
  Request 1 comes from 'Client B' Squid forwards the request to 'Server B'
  Request 2 comes from 'Client B' Squid forwards the request to 'Server B'
 
  if 'Server A' fails Squid should forward all the request to 'Server B'
 
  //Remy
 
 
  HTTP is stateless. It contains no such thing as sessions. That is a 
  browser feature.
 
  What you are looking for is something like CARP or sourcehash peering 
  algorithms. They keep all requests for certain URLs sent to the same 
  place (CARP) or all requests for the same IP to the same place 
  (sourcehash).
 
  see
  http://www.squid-cache.org/Doc/config/cache_peer
 
 
  Amos
  
  



--
Disclaimer and Confidentiality


This material has been checked for  computer viruses and although none has
been found, we cannot guarantee  that it is completely free from such problems
and do not accept any  liability for loss or damage which may be caused.
Please therefore  check any attachments for viruses before using them on your
own  equipment. If you do find a computer virus please inform us immediately
so that we may take appropriate action. This communication is intended  solely
for the addressee and is confidential. If you are not the intended recipient,
any disclosure, copying, distribution or any action  taken or omitted to be
taken in reliance on it, is prohibited and may be  unlawful. The views
expressed in this message are those of the  individual sender, and may not
necessarily be that of ISA.


Re: [squid-users] squid 2.7 / 3.0 : delay pools

2009-06-15 Thread Fabien Seisen
2009/6/6 Amos Jeffries squ...@treenet.co.nz:
 It may also require some trawling through a debug_options ALL,9 if you
 are
 able to generate one from your test case.

 debug_options speaks a *lot* and I do not know what to look for ...
 any hints for me? :)


 Bit of a stab in the dark there for me too. Thus the full trace.

 AFAIK pools works by slowing the write. But reads are open still and stuff
 like ICAP and ClientStreams may pull the whole file in.

 I'm suspecting something is accounting for pooled data but not checking
 before sending a whole pile down the way.

i tried a diff but squid switched from C to CPP :/

-- 
Fabien


Re: [squid-users] Https redirect?

2009-06-15 Thread Amos Jeffries

Chris Williams wrote:

Hi,
I'm running Squid Beta 3.1 in forward proxy mode. I'd like to redirect 
certain domains to my own hosted page using an external redirector script.


This works fine for http traffic, but with https my browser tells me that 
the proxy server has refused my connection.


Is there any way for me to get my own page displayed here?


Depends on what error page you are seeing. If it is actually a browser 
generated one, then its a browser setting to change (if even possible).


If it's a Squid one then find the ACL doing denial and alter the 
appropriate response with deny_info.


With HTTPS its tricky because Squid does not naturally see the URL. What 
it gets is a hostname and port to create a tunnel for client to shove 
encrypted data down. The request header URL etc are inside that 
encrypted portion.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


[squid-users] Squid on DMZ

2009-06-15 Thread João Kuchnier
Hi everyone!

Today I'm running squid on the firewall and it is very easy to manage.
Despite that, we are trying to decentralize services and add new
virtual machines on the DMZ, one for each of the servers we need.

I would like to know if you recommend installing Squid on the DMZ,
whether it is easy to manage, and how I could manage the rules on the
firewall (we use shorewall).

Best regards,

João K.


RE: [squid-users] Gzip

2009-06-15 Thread ADEBAYO, FOLUSO, ATTSI
Thanks Kinkie, for the response. Can you tell me exactly why it can't be done?

-Original Message-
From: Kinkie [mailto:gkin...@gmail.com] 
Sent: Friday, June 12, 2009 10:31 AM
To: ADEBAYO, FOLUSO, ATTSI
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Gzip

On Fri, Jun 12, 2009 at 4:23 PM, ADEBAYO, FOLUSO, ATTSIfa6...@att.com wrote:
 Hi All,
    Does anyone know of a way to implement gzip in Squid 2.6? I am new
 to Squid and need to have this completed ASAP.

Can't be done. The best option is squid 3.1 with the accompanying eCAP
GZIP module, or MAYBE some ICAP service (but I don't know of any such
service).


-- 
/kinkie


[squid-users] Blocking mime application/x-sh also blocks mime application/x-shockwave-flash

2009-06-15 Thread Ronie Gilberto Henrich
Hi,

When blocking the mime type application/x-sh using http_reply_access deny,
it blocks the mime type application/x-shockwave-flash too.
Could it be a bug with Squid?

I am using Squid version 3.0.14-r2, amd64.


Thanks and regards,
Ronie Henrich



Re: [squid-users] Gzip

2009-06-15 Thread Chris Woodfield
There is no code in squid to transform content inside the cache
beyond headers. The development path for content transformation (of
which gzip compression is one of many potential examples) is via
ICAP services (3.0 and above) and eCAP plugins (3.1).


That said, squid is 100% open source, so feel free to adapt the code  
to your needs if you have the time and expertise.


-C

On Jun 15, 2009, at 11:34 AM, ADEBAYO, FOLUSO, ATTSI wrote:

Thanks Kinkie, for the response. Can you tell me exactly why it can't
be done?


-Original Message-
From: Kinkie [mailto:gkin...@gmail.com]
Sent: Friday, June 12, 2009 10:31 AM
To: ADEBAYO, FOLUSO, ATTSI
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Gzip

On Fri, Jun 12, 2009 at 4:23 PM, ADEBAYO, FOLUSO,  
ATTSIfa6...@att.com wrote:

Hi All,
  Does anyone know of a way to implement gzip in Squid 2.6? I am new
to Squid and need to have this completed ASAP.


Can't be done. The best option is squid 3.1 with the accompanying eCAP
GZIP module, or MAYBE some ICAP service (but I don't know of any such
service).


--
  /kinkie





[squid-users] tcp_outgoing TOS

2009-06-15 Thread Evelio Vila
hi list,

I've been using tcp_outgoing_tos for a while now and I would like to do
user (login) and URL lookups against data stored in a SQL database.
I'm using something like:

acl top_users proxy_auth /etc/users.top
acl top_url url_regex /etc/top_url

tcp_outgoing_tos 0x10 top_users
tcp_outgoing_tos 0x04 top_url


Is there a way to accomplish this?

I've read that currently this feature doesn't support external acl
lookups, is this true?


regards,
evelio vila


VI International Conference on Renewable Energy, Energy Saving and
Energy Education
9 - 12 June 2009, Palacio de las Convenciones
...For a sustainable energy culture
www.ciercuba.com


[squid-users] 3rd email for RPC Over HTTPS issue

2009-06-15 Thread Mario Remy Almeida
Hi All,

This is my 3rd email for the below mentioned problem.
I am writing this email in the hope that someone will reply and say if
it can be done or not. Just yes or no will do for me so that I know it
is possible or not.

Successfully configured reverse proxy HTTPS, but having a problem with RPC over HTTPS.

Squid 2.7STABLE6
Windows 2008
Exchange 2007

Having issue with RPC over HTTPS, below is the error message

Attempting to ping RPC Endpoint 6001 (Exchange Information Store) on
server hubsexchange.airarabiauae.com. "Failed to ping Endpoint".
Additional Details: An RPC Error was thrown by the RPC Runtime. Error
1818 1818

Please let me know what could be the problem, some hint.

//Remy


--
Disclaimer and Confidentiality


This material has been checked for  computer viruses and although none has
been found, we cannot guarantee  that it is completely free from such problems
and do not accept any  liability for loss or damage which may be caused.
Please therefore  check any attachments for viruses before using them on your
own  equipment. If you do find a computer virus please inform us immediately
so that we may take appropriate action. This communication is intended  solely
for the addressee and is confidential. If you are not the intended recipient,
any disclosure, copying, distribution or any action  taken or omitted to be
taken in reliance on it, is prohibited and may be  unlawful. The views
expressed in this message are those of the  individual sender, and may not
necessarily be that of ISA.


[squid-users] extracting icp_query_timeout info?

2009-06-15 Thread Ross J. Reedstrom
Hey all -
I'm using squid 2.7.6 in a reverse-proxy web-accelerator config,
fronting a Zope-based app server, via cache_peer and Zope's ICP
functionality for weighting the peers. I've been delving into the
peer selection algorithms, and was wondering if there's some way to
extract squid's idea of the current dynamic value of the
icp_query_timeout? I get a proxy for it, based on hierarchy codes w/
'TIMEOUT_' and 'IGNORED' in the server_list stats. Anything more
explicit?

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientist    phone: 713-348-6166
The Connexions Project  http://cnx.org          fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


[squid-users] Access control : How to block a very large number of domains

2009-06-15 Thread hims92

Hi,
As far as I know, SquidGuard uses Berkeley DB (which is based on BTree and
Hash tables) for storing the urls and domains to be blocked. But I need to
store a huge number of domains (about 7 million) which are to be blocked.
Moreover, the search time to check if the domain is there in the block list,
has to be less than a microsecond.

So, Will Berkeley DB serve the purpose?

I can search for a domain using PATRICIA Trie in less than 0.1 microseconds.
So, if Berkeley DB is not good enough, how can I use the PATRICIA trie
instead of Berkeley DB in Squid to block the URL?
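Purely as an illustration of the data structure being discussed (not Squid or SquidGuard code), a suffix lookup over reversed domain labels can be sketched with plain nested dicts; a real PATRICIA trie would additionally compress single-child chains:

```python
class DomainTrie:
    """Blocklist lookup: a host is blocked if any domain suffix of it is listed."""

    def __init__(self):
        self.root = {}

    def add(self, domain):
        # store labels in reverse: "ads.example.com" -> com / example / ads
        node = self.root
        for label in reversed(domain.lower().split(".")):
            node = node.setdefault(label, {})
        node["$"] = True  # end-of-domain marker

    def blocked(self, host):
        # walk the host's labels from the TLD inward; any marker on the
        # way means a blocked domain is a suffix of this host
        node = self.root
        for label in reversed(host.lower().split(".")):
            if "$" in node:
                return True
            node = node.get(label)
            if node is None:
                return False
        return "$" in node

trie = DomainTrie()
trie.add("ads.example.com")
trie.add("tracker.net")
print(trie.blocked("www.ads.example.com"))  # True: blocked suffix matches
print(trie.blocked("example.com"))          # False: parent domain not listed
```

Each lookup touches at most one dict per label, so the cost is bounded by the number of labels in the host, independent of the 7-million-entry list size.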


-- 
View this message in context: 
http://www.nabble.com/Access-control-%3A-How-to-block-a-very-large-number-of-domains-tp24041263p24041263.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Blocking mime application/x-sh also blocks mime application/x-shockwave-flash

2009-06-15 Thread Chris Robertson

Ronie Gilberto Henrich wrote:

Hi,

When blocking the mime type application/x-sh using http_reply_access deny,
it blocks the mime type application/x-shockwave-flash too.
Could it be a bug with Squid?
  


It could be a Squid bug, but I would be more apt to blame an improperly 
formatted regular expression.
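For example, if the ACL value is an unanchored regex, application/x-sh matches anywhere inside application/x-shockwave-flash. A sketch of the anchored form (acl name illustrative):

```
# unanchored: application/x-sh also matches application/x-shockwave-flash
# acl sh_reply rep_mime_type application/x-sh
# anchored: matches the exact type only
acl sh_reply rep_mime_type ^application/x-sh$
http_reply_access deny sh_reply
```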



I am using Squid version 3.0.14-r2, amd64.


Thanks and regards,
Ronie Henrich
  


Chris


Re: [squid-users] 3rd email for RPC Over HTTPS issue

2009-06-15 Thread Chris Robertson

Mario Remy Almeida wrote:

Hi All,

This is my 3rd email for the below mentioned problem.
I am writing this email in the hope that someone will reply and say if
it can be done or not. Just yes or no will do for me so that I know it
is possible or not.

Successfully configured reverse proxy HTTPS, but having a problem with RPC over HTTPS.

Squid 2.7STABLE6
Windows 2008
Exchange 2007

Having issue with RPC over HTTPS, below is the error message

Attempting to ping RPC Endpoint 6001 (Exchange Information Store) on
server hubsexchange.airarabiauae.com. "Failed to ping Endpoint".
Additional Details: An RPC Error was thrown by the RPC Runtime. Error

1818 1818

Please let me know what could be the problem, some hint.
  


Having no knowledge of accelerating Exchange, the only help I can 
provide is a link to the Configuration Examples page on the Wiki:


http://wiki.squid-cache.org/ConfigExamples/Reverse/ExchangeRpc

Chris


//Remy




[squid-users] Bypassing squid for certain sites

2009-06-15 Thread Jamie Orzechowski
I am having issues with a few sites like megavideo, hotmail, etc and
looking to bypass them entirely via IPTables ... I have added some
rules to IPTables but I still see the traffic hitting the caches.  Any
ideas?

The strange thing is that when running iptables --list it shows no
rules configured at all.

Here are my iptables rules:

/usr/local/sbin/iptables -t mangle -N DIVERT
/usr/local/sbin/iptables -t mangle -A DIVERT -j MARK --set-mark 1
/usr/local/sbin/iptables -t mangle -A DIVERT -j ACCEPT
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

#Bypass These subnets
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 65.54.186.0/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 65.54.165.0/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 72.32.79.195/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 64.4.20.0/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 69.5.88.0/24 -j RETURN

# Redirect to squid
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp --dport 80 -j
TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
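A likely reason `iptables --list` shows nothing here is that `--list` prints
the filter table by default, while every rule above lives in the mangle table.
Something like the following (a generic iptables invocation, not specific to
this setup) should display them, with packet counters:

```
iptables -t mangle -L -n -v
```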


Re: [squid-users] Blocking mime application/x-sh also blocks mime application/x-shockwave-flash

2009-06-15 Thread Ronie Gilberto Henrich
Hi Chris,

There is no regular expression in this case (rep_mime_type):
/etc/squid/squid.conf
...
acl deny_file_mime_rep   rep_mime_type /etc/squid/denied_file_mime

http_reply_access deny all deny_file_mime_rep
...

/etc/squid/denied_file_mime
application/x-sh


Any ideas?


Thanks and regards,
Ronie


 Original Message  
Subject: Re: [squid-users] Blocking mime application/x-sh also
blocks mime application/x-shockwave-flash
From: Chris Robertson crobert...@gci.net
To: squid-users@squid-cache.org
Date: Mon Jun 15 2009 15:58:11 GMT-0400 (Eastern Daylight Time)

 Ronie Gilberto Henrich wrote:
 Hi,
 
 When block mime type application/x-sh using http_reply_access
 deny, it is blocking mime type application/x-shockwave-flash
 too. Could it be a bug with Squid?
 
 
 It could be a Squid bug, but I would be more apt to blame an
 improperly formatted regular expression.
 
 I am using Squid version 3.0.14-r2, amd64.
 
 
 Thanks and regards, Ronie Henrich
 
 
 Chris


Re: [squid-users] extracting icp_query_timeout info?

2009-06-15 Thread Chris Robertson

Ross J. Reedstrom wrote:

Hey all -
I'm using squid 2.7.6 in a reverse-proxy web-accelerator config,
fronting a Zope-based app server, via cache_peer and Zope's ICP
functionality for weighting the peers. I've been delving into the
peer selection algorithms, and was wondering if there's some way to
extract squid's idea of the current dynamic value of the
icp_query_timeout?  I get a proxy of it, based on hierarchy codes w/
'TIMEOUT_' and 'IGNORED' in the server_list stats. Anything more
explicit?
  


Using squidclient mgr:server_list there is an AVG RTT value.  In
mgr:digest_stats there is icp.query_median_svc_time.



Ross
  


Chris


Re: [squid-users] authentication retries

2009-06-15 Thread Al - Image Hosting Services

Hi,

On Mon, 15 Jun 2009, Amos Jeffries wrote:

On Sun, 14 Jun 2009 20:28:28 -0500 (CDT), Al - Image Hosting Services
az...@zickswebventures.com wrote:

Hi,

After thinking about it, I decided that if a person lost their password,
I should have a way for them to retrieve it without needing me, so I
added an acl to unblock a site so it would work without authentication.
Where I have a problem is that it looks like you can try wrong usernames
and passwords all day. Could someone tell me how many times a user will be
able to type in their username and password before squid will give the
ERR_CACHE_ACCESS_DENIED page? Or if there is even a way to change this
number. I would like people to see the error page after maybe 10 tries. If
this can't be changed, then I will need to find another way to deal with
this issue.

Best Regards,
Al


Zero times. It is displayed immediately when auth credentials are missing
or bad.

The problem you have now is that the error page is hidden by the browsers
and converted into that popup everyone is so familiar with.


I must admit that I really expected to get this answer, but I need to be
sure. Do you know if there is any kind of workaround?


Thanks,
Al


Re: [squid-users] extracting icp_query_timeout info?

2009-06-15 Thread Ross J. Reedstrom
On Mon, Jun 15, 2009 at 12:16:27PM -0800, Chris Robertson wrote:
 
 Using squidclient mgr:server_list there is a AVG RTT value.  In 
 mgr:digest_stats there is icp.query_median_svc_time.

Which are both useful values. However, they don't tell me what value the
server is using for the dynamic timeout. AFAICT, it's being kept on a
per-server basis. The mystery is that I've got a load-balancing setup,
and squid seems to be favoring servers for which server_list claims an AVG RTT
in the 200-500 ms range, when other, less favored servers are showing
5-20 ms.  (Walking the code now, which is much easier once I realized my
editor's tabstop was set for the python-default 4 spaces, not c-friendly
8. Oops.) Anyone got pointers to any sort of higher-level design for any
of this? I've spent significant time poking around the wiki and FAQ, and
haven't surfaced much.

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientist    phone: 713-348-6166
The Connexions Project      http://cnx.org      fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE



Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread George Herbert
Most of the suggestions so far have missed the mark.

Squid - like an Apache web server etc - is essentially stateless
(transactions in progress don't make permanent changes).  You can run
any number of web servers or Squid servers in parallel with requests
being freely responded to by any of them.  If you set them up as a
cache peering group, the cache hit rate issues with multiple separate
servers are significantly reduced.
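The cache peering group mentioned above can be expressed in squid.conf with
sibling `cache_peer` entries; this is only a hedged sketch (the hostname and
ports are assumptions, not taken from the thread):

```
# On squid1 (squid2 carries the mirror-image line pointing back at squid1).
# "sibling" means: fetch from the peer only on a cache hit there;
# "proxy-only" avoids storing a second copy of objects the sibling holds.
cache_peer squid2.example.com sibling 3128 3130 proxy-only
icp_port 3130
```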

High Availability for servers that can run in parallel in this manner
is almost always done by putting some sort of load balancer out in
front, not using clustering software to fail over a service between
two nodes.

HA software makes little sense in this case.

There are various free HTTP load balancer software solutions out there
which are open source, or you can buy a commercial load balancer if
you have higher bandwidth requirements.  Most of those applications
can cluster, giving you load balancer level HA.

Multiple DNS A records don't necessarily work - many clients will
try the first A record result they get, and if they get no response
assume the server is down.  If you know that all the client software
behind your squids are properly able to try second or third A records,
then that's safe - but test it first.

One can use Linux HA or another clustering solution to create a
virtual IP address that can move around, server to server, so you
don't need a load balancer and if server A goes down the IP will go to
server B.  But it's a very poor match to the application.


-george william herbert
george.herb...@gmail.com


On Mon, Jun 15, 2009 at 5:43 AM, abdul samisami.me...@gmail.com wrote:
 Thanks to all for the replies.

 Sorry I didn't mention the platform I am using to run squid on,
 which is FreeBSD 7.

 I have visited the linux-ha site, where it says the software is
 supported for FreeBSD too, but there is no distribution for FreeBSD, so
 can you tell me which distribution I can use for FreeBSD 7?

 Thanks & Regards,
 A Sami

 On Mon, Jun 15, 2009 at 4:07 PM, Muhammad
 Sharfuddinm.sharfud...@nds.com.pk wrote:
 just a question

2. Use an HA solution such as Ultramonkey3. Here you could do
Active-Active.
 Why Ultramonkey3.. why not HA from http://www.linux-ha.org/

 -Sharfuddin

 A PC is like a aircondition. If you open Windows it just don't funktion
 properly anymore

 On Mon, 2009-06-15 at 12:12 +0200, Luis Daniel Lucio Quiroz wrote:
 There are 2 ways, as far as I know, to make this possible:

 1. Use the WPAD protocol: let's say PROXY squid1; PROXY squid2 (this is
 failover)
 2. Use an HA solution such as Ultramonkey3. Here you could do Active-Active.

 Kind regards,

 LD
 Le lundi 15 juin 2009 11:09:28, Sagar Navalkar a écrit :
  Hey Remy,
 
  The DNS server does not determine which server is down, however If It is
  unable to resolve the 1st entry, it will automatically go down to the 2nd
  entry.
 
  Regards,
 
  Sagar Navalkar
  Team Leader
 
 
  -Original Message-
  From: Mario Remy Almeida [mailto:malme...@isaaviation.ae]
  Sent: Monday, June 15, 2009 1:36 PM
  To: Sagar Navalkar
  Cc: squid-users@squid-cache.org; 'abdul sami'
  Subject: RE: [squid-users] How to setup squid proxy to run in fail-over
  mode
 
  Hi Sagar,
 
  Just a Question?
 
  How can a DNS server determine that the primary server is down and it
  should resolve the secondary server IP?
 
  //Remy
 
  On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
   Hi Abdul,
  
   Please try to enter 2 different IPs in the DNS 
  
   10.xxx.yyy.zz1 (proxyA) as primary (proxyA-Name should be same on both
   the servers.)
   10.xxx.yyy.zz2 (proxyA) as secondary.
  
   Start squid services on both the servers (Primary & Secondary)
  
   If the primary server fails, the DNS will resolve the secondary IP for proxyA &
   the squid on the second server will kick in automatically.
  
   Hope am able to explain it properly.
  
   Regards,
  
   Sagar Navalkar
  
  
   -Original Message-
   From: abdul sami [mailto:sami.me...@gmail.com]
   Sent: Monday, June 15, 2009 11:17 AM
   To: squid-users@squid-cache.org
   Subject: [squid-users] How to setup squid proxy to run in fail-over mode
  
   Dear all,
  
   Now that i have setup a proxy server, as a next step i want to run it
   in fail-over high availability mode, so that if one proxy is down due
   to any reason, second proxy should automatically be up and start
   serving requests.
  
   any help in shape of articles/steps would be highly appreciated.
  
   Thanks and regards,
  
   A Sami
 
  ---
 - --
  Disclaimer and Confidentiality
 
 
  This material has been checked for  computer viruses and although none has
  been found, we cannot guarantee  that it is completely free from such
  problems
  and do not accept any  liability for loss or damage which may be caused.
  Please therefore  check any attachments for viruses before using them on
  your
  own  equipment. If you 

Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread K K
 1. Use the WPAD protocol: let's say PROXY squid1; PROXY squid2
 (this is failover)

IMHO, using PAC (with or without WPAD) is the simplest and most
effective approach to failover, requiring no additional software
beyond a web server to host the PAC file.

With PAC, the browser will automatically switch to the second proxy in
the list if the first stops responding.  All modern graphical browsers
support PAC, and nearly all support WPAD.

The PAC script is very powerful; you can use many, but not all,
Javascript string and numeric functions.  With a little effort you can
have PAC distribute user load across multiple proxy servers, or even
hash the request URL so, for example, all requests for dilbert.com
first go to squid1, to get the most value from cached content.

For more on PAC, see http://wiki.squid-cache.org/Technology/ProxyPac
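A minimal PAC sketch along the lines described above: the browser walks the
returned list left to right and fails over automatically, and a cheap hash of
the hostname pins each site to a preferred proxy. The proxy names squid1/squid2
and port 3128 are assumptions for the example (a real PAC file may also use
browser-provided helpers such as dnsDomainIs, omitted here):

```javascript
function FindProxyForURL(url, host) {
  // Cheap, deterministic hash of the hostname so each site consistently
  // prefers the same proxy, improving cache hit rates.
  var h = 0;
  for (var i = 0; i < host.length; i++) {
    h = (h * 31 + host.charCodeAt(i)) % 2;
  }
  // The browser tries each entry in order; if the first proxy stops
  // responding it silently moves on to the next, then goes direct.
  if (h === 0)
    return "PROXY squid1:3128; PROXY squid2:3128; DIRECT";
  return "PROXY squid2:3128; PROXY squid1:3128; DIRECT";
}
```

Serving this file from a web server (and advertising it via WPAD if desired)
gives both load spreading and failover with no extra HA software.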


[squid-users] Tproxy Help // Transparent works fine

2009-06-15 Thread Alexandre DeAraujo
I have a transparent proxy setup currently working and am not seeing any
problems while browsing. I am trying to set up squid to show the
client's IP instead of the proxy server's IP.
How do I go from this setup to implementing tproxy? Any pointers will be
highly appreciated.

CentOS release 5.3 (Final)
iptables v1.4.3.2
Squid Cache: Version 3.0.STABLE16
Linux 2.6.29.4-tproxy2 (custom kernel for tproxy)
Cisco 7206VXR WCCPv2

// start of squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl SSL_ports port 8443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 8443# Plesk
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
#http_access deny all
http_access allow all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
hosts_file /etc/hosts
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
coredump_dir /var/spool/squid

http_port 3129

logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
#emulate_httpd_log on
access_log /var/log/squid/access.log squid
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
debug_options ALL,3

no_cache allow our_networks
cache_dir ufs /var/spool/squid 20 256 256
cache_effective_user squid
cache_swap_high 100%
cache_swap_low 80%
cache_mem 2 GB
maximum_object_size  8192 KB
half_closed_clients on
client_db off

wccp2_router router primary IP on GEthernet
wccp2_rebuild_wait on
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_assignment_method 1
wccp2_service standard 0

forwarded_for on
// end of squid.conf

// start of /etc/rc.d/rc.local
modprobe ip_gre
iptunnel add wccp2 mode gre remote router wccp id IP address local eth0 IP 
address dev eth0
ifconfig wccp2 eth0 IP Address netmask 255.255.255.255 up
echo 0 > /proc/sys/net/ipv4/conf/wccp2/rp_filter
# these are the ONLY iptables rules on the system at the moment(to avoid 
issues).
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 
3128 
iptables -t nat -A PREROUTING -i wccp2 -p tcp -j REDIRECT --to-port 3128
// end of rc.local

Thanks,

Alex DeAraujo




RE: [squid-users] Web mail attachments page cannot display

2009-06-15 Thread web
Sorry I explained myself poorly.

All requests still need to go out to the parent proxy (the links all go back to 
the core and don't allow internet access unless going out the parent proxy in 
the core, which has a 100MB connection to the ISP).

So from my understanding, I won't be able to use the always_direct allow nocache 
command.  Instead, should I use the cache deny nocache line (instead of the 
no_cache deny nocache)?

Again, I explained poorly: I still expect all requests to hit the local 
caching appliance, I just don't want them to source the content from the cache 
(i.e. get the content from the internet, as the parent cache doesn't cache, it 
just authenticates).

I am definitely up for suggestions on what you think I should have for the 
cache_mem, maximum_object_size and cache_dir settings, which I currently have 
set to:
 cache_mem 32 MB
  maximum_object_size 30720 KB
  cache_dir aufs d:/squid/var/cache 60000 16 256

The hard drives are all 160GB, with 60GB set up on C for the operating system 
and programs.  The D drive is the remaining 100GB, with the cache and logs 
folders on it.

Each machine has 1GB of ram.

Appreciate the help.  Thanks.


From: Amos Jeffries [squ...@treenet.co.nz]
Sent: Friday, 12 June 2009 11:48 AM
To: web
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Web mail attachments page cannot display

web wrote:
 Hi,  I have 500 squidnt 2.7 stable 5 appliances out at distributed
 offices.  It is being reported to me that when connected to the local
 caching appliance, intermittently they are getting page cannot
 display messages when using webmail and adding attachments.  If they
 point to the upstream (parent) cache, they are not experiencing the
 problem.  What I have tried, is to put the URL for the webmail in the
 nocache.conf file, so it doesn't cache this information, therefore I
 would assume that its going direct (much the same way as if they
 pointed their caching appliance to upstream server).

You assume wrong. no_cache directive is an obsolete spelling of
cache directive.

The only way to make requests go directly to an outside server without
involving Squid is to do it at the browser (explicit settings or
WPAD/PAC file) or the firewall (interception bypass rules).

Once the request reaches Squid it's too late not to handle it.

   The upstream
 (core) squid appliance is managed outside our company, so we dont
 have anything to do with it, but it shouldn't matter either as it
 works pointing directly to it.  Does anyone have any suggestions to
 what I could try or what I am doing wrong?  I have pasted the local
 caching appliance config to help with identifying the problem.
 Thanks in advance.


'always_direct' is the directive to make Squid use a direct link to the
outside server instead of one of the cache_peer links.

I'd try setting:
   always_direct allow nocache

Which will cut the proxy hierarchy to one layer and improve the chances
of a successful request.
I've seen this type of thing with a slow link and large uploaded file
(order of MB such as MS office generated files).

Amos


  http_port 8080
  cache_peer proxy. parent 8080 3130 no-query default login=PASS
  hierarchy_stoplist cgi-bin ?
  acl QUERY urlpath_regex cgi-bin \?
  no_cache deny QUERY

change that to cache deny

  cache_mem 32 MB
  maximum_object_size 30720 KB
  cache_dir aufs d:/squid/var/cache 60000 16 256

60GB of storage with a 30MB absolute cap on object size...

cap of 32MB worth of objects stored in RAM-cache at any point.

  auth_param digest children 5
  auth_param digest realm Squid proxy-caching web server
  auth_param digest nonce_garbage_interval 5 minutes
  auth_param digest nonce_max_duration 30 minutes
  auth_param digest nonce_max_count 50
  auth_param basic children 5
  auth_param basic realm Squid proxy-caching web server
  auth_param basic credentialsttl 2 hours
  auth_param basic casesensitive off
  refresh_pattern ^ftp:  1440 20% 10080
  refresh_pattern ^gopher: 1440 0% 1440
  refresh_pattern .  0 20% 4320
  acl all src 0.0.0.0/0.0.0.0
  acl manager proto cache_object
  acl localhost src 127.0.0.1/255.255.255.255
  acl to_localhost dst 127.0.0.0/8
  acl SSL_ports port 443 563
  acl Safe_ports port 80  # http
  acl Safe_ports port 21  # ftp
  acl Safe_ports port 443 563 # https, snews
  acl Safe_ports port 70  # gopher
  acl Safe_ports port 210  # wais
  acl Safe_ports port 1025-65535 # unregistered ports
  acl Safe_ports port 280  # http-mgmt
  acl Safe_ports port 488  # gss-http
  acl Safe_ports port 591  # filemaker
  acl Safe_ports port 777  # multiling http
  acl CONNECT method CONNECT
  acl snmppublic snmp_community xx
  acl snmpprivate snmp_community xx
  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  acl block url_regex -i d:/squid/var/logs/block.conf
  acl unblock url_regex -i d:/squid/var/logs/unblock.conf
  acl 

Re: [squid-users] Squid on DMZ

2009-06-15 Thread Amos Jeffries
On Mon, 15 Jun 2009 11:47:46 -0300, João Kuchnier
joao.kuchn...@gmail.com
wrote:
 Hi everyone!
 
 Today I'm running squid on the firewall and it is very easy to manage.
 Despite that, we are trying to decentralize services and add new
 virtual machines on the DMZ for each of the servers we need.
 
 I would like to know if you recommend installing Squid on the DMZ,
 whether it is easy to manage, and how I could manage rules on the
 firewall (we use shorewall).

I don't have any recommendations either way. The pros and cons balance out
for most intents and purposes. If it's working fine for you as-is then there
really isn't anything to fix.

If you do make the move, be aware that with interception the firewall will
need to take into account the squid box IP and make exceptions. There is also
an added flow of traffic client -> router -> squid -> router -> internet which
does not currently occur on the internal router interface. This effectively
doubles or triples the internal HTTP traffic load on the router.


Amos



RE: [squid-users] Web mail attachments page cannot display

2009-06-15 Thread Amos Jeffries
On Tue, 16 Jun 2009 09:02:42 +0930, web w...@onwestside.com.au wrote:
 Sorry I explained myself poorly.
 
 All requests still need to go out to the parent proxy (the links all go
 back to the core and dont allow internet access unless going out the
parent
 proxy in the core, which has a 100MB connection to the isp).
 
 So from my understanding, i wont be able to use the always_direct allow
 nocache command.  Instead should I use the cache deny nocache line
(instead
 of the no_cache deny nocache).
 
 Again, i explained poorly as I still expect all requests to hit the local
 caching appliance, just dont want them to source the content from the
cache
 (i.e. get the content from the internet, as the parent cache doesnt
cache,
 just authenticates).


Hmm, the usual method of doing this is to store/cache at the local Squid
(layer #2 away from the Internet) and keep the central core proxy (layer #1
away from the Internet) as a simple high-speed pass-thru proxy without any
storage. That reduces load on the central proxy and lets the layers expand
to huge bandwidths (for example, several TB per second over all Squid).


To prevent storage:
  cache deny all
  cache_dir null /tmp


To send all requests to a parent proxy,  never going direct to the
internet:
  never_direct allow all
  always_direct deny all
  prefer_direct off

 
 I am definitely up for suggestions on what you think i should have the
 cache_mem, maximum_object_size and cache_dir commands? Which i currently
 have set to:
 cache_mem 32 MB

At a guess, I'd start with 25% of the free system memory or 15 minutes of
cached HITS...

This is mostly relevant for a storage proxy though.

   maximum_object_size 30720 KB
   cache_dir aufs d:/squid/var/cache 60000 16 256
 
 The hard drives are all 160GB, with 60GB setup on C for the operating
 system, and programs.  D drive is the remaining 100GB, with the cache and
 logs folders on it.
 
 Each machine has 1GB of ram.
 
 Appreciate the help.  Thanks.
 
 
 From: Amos Jeffries [squ...@treenet.co.nz]
 Sent: Friday, 12 June 2009 11:48 AM
 To: web
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Web mail attachments page cannot display
 
 web wrote:
 Hi,  I have 500 squidnt 2.7 stable 5 appliances out at distributed
 offices.  It is being reported to me that when connected to the local
 caching appliance, intermittently they are getting page cannot
 display messages when using webmail and adding attachments.  If they
 point to the upstream (parent) cache, they are not experiencing the
 problem.  What I have tried, is to put the URL for the webmail in the
 nocache.conf file, so it doesn't cache this information, therefore I
 would assume that its going direct (much the same way as if they
 pointed their caching appliance to upstream server).
 
 You assume wrong. no_cache directive is an obsolete spelling of
 cache directive.
 
 The only way to make requests go directly to an outside server without
 involving Squid is to do it at the browser (explicit settings or
 WPAD/PAC file) or the fireawall (interception bypass rules).
 
 Once the request reaches Squid its too late to not handle.
 
The upstream
 (core) squid appliance is managed outside our company, so we dont
 have anything to do with it, but it shouldn't matter either as it
 works pointing directly to it.  Does anyone have any suggestions to
 what I could try or what I am doing wrong?  I have pasted the local
 caching appliance config to help with identifying the problem.
 Thanks in advance.

 
 'always_direct' is the directive to make Squid use a direct link to the
 outside server instead of one of the cache_peer links.
 
 I'd try setting:
always_direct allow nocache
 
 Which will cut the proxy hierarchy to one layer and improve the chances
 of a successful request.
 I've seen this type of thing with a slow link and large uploaded file
 (order of MB such as MS office generated files).
 
 Amos
 
 
   http_port 8080
   cache_peer proxy. parent 8080 3130 no-query default login=PASS
   hierarchy_stoplist cgi-bin ?
   acl QUERY urlpath_regex cgi-bin \?
   no_cache deny QUERY
 
 change that to cache deny
 
   cache_mem 32 MB
   maximum_object_size 30720 KB
   cache_dir aufs d:/squid/var/cache 60000 16 256
 
 60GB of storage with a 30MB absolute cap on object size...
 
 cap of 32MB worth of objects stored in RAM-cache at any point.
 
   auth_param digest children 5
   auth_param digest realm Squid proxy-caching web server
   auth_param digest nonce_garbage_interval 5 minutes
   auth_param digest nonce_max_duration 30 minutes
   auth_param digest nonce_max_count 50
   auth_param basic children 5
   auth_param basic realm Squid proxy-caching web server
   auth_param basic credentialsttl 2 hours
   auth_param basic casesensitive off
   refresh_pattern ^ftp:  1440 20% 10080
   refresh_pattern ^gopher: 1440 0% 1440
   refresh_pattern .  0 20% 4320
   acl all src 0.0.0.0/0.0.0.0
   acl manager proto 

Re: [squid-users] Blocking mime application/x-sh also blocks mime application/x-shockwave-flash

2009-06-15 Thread Amos Jeffries
On Mon, 15 Jun 2009 16:11:34 -0400, Ronie Gilberto Henrich
ro...@ronie.com.br wrote:
 Hi Chris,
 
 It is no regular expression in this case (rep_mime_type):
 /etc/squid/squid.conf
 ...
 acl deny_file_mime_rep   rep_mime_type /etc/squid/denied_file_mime
 
 http_reply_access deny all deny_file_mime_rep
 ...
 
 /etc/squid/denied_file_mime
 application/x-sh
 
 
 Any ideas?

IIRC the rep_mime_type uses regex to match.

Try this for the mime type:

application/x-sh$

or this:

application/x-sh(;.*)?$

Amos
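The effect of anchoring is easy to check outside Squid. A quick sketch (using
plain JavaScript regexes, which behave the same as the substring-style match
here): "application/x-sh" is a prefix of "application/x-shockwave-flash", so
the unanchored pattern matches both, while the `$`-anchored one does not.

```javascript
// Unanchored: matches anywhere in the string, so the x-sh pattern
// also hits the Flash MIME type (of which it is a prefix).
const unanchored = /application\/x-sh/;
// Anchored: the type must END right after "x-sh".
const anchored = /application\/x-sh$/;

console.log(unanchored.test("application/x-shockwave-flash")); // true  -> over-blocks
console.log(anchored.test("application/x-shockwave-flash"));   // false
console.log(anchored.test("application/x-sh"));                // true
```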

 
 
 Thanks and regards,
 Ronie
 
 
  Original Message  
 Subject: Re: [squid-users] Blocking mime application/x-sh also
 blocks mime application/x-shockwave-flash
 From: Chris Robertson crobert...@gci.net
 To: squid-users@squid-cache.org
 Date: Mon Jun 15 2009 15:58:11 GMT-0400 (Eastern Daylight Time)
 
 Ronie Gilberto Henrich wrote:
 Hi,
 
 When block mime type application/x-sh using http_reply_access
 deny, it is blocking mime type application/x-shockwave-flash
 too. Could it be a bug with Squid?
 
 
 It could be a Squid bug, but I would be more apt to blame an
 improperly formatted regular expression.
 
 I am using Squid version 3.0.14-r2, amd64.
 
 
 Thanks and regards, Ronie Henrich
 
 
 Chris


Re: [squid-users] Blocking mime application/x-sh also blocks mime application/x-shockwave-flash

2009-06-15 Thread Ronie Gilberto Henrich

Thanks Amos, problem solved!



 Original Message  
Subject: Re: [squid-users] Blocking mime application/x-sh also
blocks mime application/x-shockwave-flash
From: Amos Jeffries squ...@treenet.co.nz
To: ro...@ronie.com.br
Cc: squid-users@squid-cache.org
Date: Mon Jun 15 2009 21:03:08 GMT-0400 (Eastern Daylight Time)

 On Mon, 15 Jun 2009 16:11:34 -0400, Ronie Gilberto Henrich 
 ro...@ronie.com.br wrote:
 Hi Chris,
 
 It is no regular expression in this case (rep_mime_type): 
 /etc/squid/squid.conf ... acl deny_file_mime_rep
 rep_mime_type /etc/squid/denied_file_mime
 
 http_reply_access deny all deny_file_mime_rep ...
 
 /etc/squid/denied_file_mime application/x-sh
 
 
 Any ideas?
 
 IIRC the rep_mime_type uses regex to match.
 
 Try this for the mime type:
 
 application/x-sh$
 
 or this:
 
 application/x-sh(;.*)?$
 
 Amos
 
 
 Thanks and regards, Ronie
 
 
  Original Message   Subject: Re: [squid-users]
 Blocking mime application/x-sh also blocks mime
 application/x-shockwave-flash From: Chris Robertson
 crobert...@gci.net To: squid-users@squid-cache.org Date: Mon
 Jun 15 2009 15:58:11 GMT-0400 (Eastern Daylight Time)
 
 Ronie Gilberto Henrich wrote:
 Hi,
 
 When block mime type application/x-sh using
 http_reply_access deny, it is blocking mime type
 application/x-shockwave-flash too. Could it be a bug with
 Squid?
 
 It could be a Squid bug, but I would be more apt to blame an 
 improperly formatted regular expression.
 
 I am using Squid version 3.0.14-r2, amd64.
 
 
 Thanks and regards, Ronie Henrich
 
 Chris



Re: [squid-users] authentication retries

2009-06-15 Thread Amos Jeffries
On Mon, 15 Jun 2009 15:51:54 -0500 (CDT), Al - Image Hosting Services
az...@zickswebventures.com wrote:
 Hi,
 
 On Mon, 15 Jun 2009, Amos Jeffries wrote:
 On Sun, 14 Jun 2009 20:28:28 -0500 (CDT), Al - Image Hosting Services
 az...@zickswebventures.com wrote:
 Hi,

 After thinking about it, I decided that if a person lost their
password,
 that I should have away for them to retrieve it without needing me, so
I
 added an acl to unblock a site so it would work without authentication.
 Where I have a problem is that it looks like you can try wrong
usernames
 and passwords all day. Could someone tell me how many times a user will
 be
 able to type in their username and password before squid will give the
 ERR_CACHE_ACCESS_DENIED page? Or if there is even a way to change this
 number. I would like people to see the error page after maybe 10 tries.
 If
 this can't be changed, then I will need to find another way to deal
with
 this issue.

 Best Regards,
 Al

 Zero times. It is displayed immediately when auth credentials are
missing
 or bad.

 The problem you have now is that the error page is hidden by the
browsers
 and converted into that popup everyone is so familiar with.
 
 I must admit that I really expected to get this answer, but I need to be 
 sure. Do you know if there is any kind of work around?
 
 Thanks,
 Al

Hmm. I'm thinking this is something useful we need to add to Squid. Patches
to Squid-3 are welcome if anyone wants something to do.

I'm working on theory here so testing and tuning are in order before this
goes live. I'm thinking you may be able to do it by altering the response
headers. It may only work in squid-3 where the headers are available
separately too.

  deny_info http://your.domain.invalid/authpage.html dummy
  reply_header_access deny !auth dummy

Where dummy is an external ACL testing to see how many times the user has
passed bad credentials in a row. You can probably get this by passing %SRC
%{Proxy-Authenticate}
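A minimal sketch of such a counting helper, assuming the external_acl_type
%FORMAT line passes %SRC as the first field (the threshold and all names here
are illustrative, not tested against Squid):

```python
#!/usr/bin/env python3
# Hypothetical external ACL helper: count auth attempts per source IP and
# answer OK once a threshold is reached, so the "dummy" ACL matches and the
# deny_info page is served. Sketch only -- the field layout must match your
# external_acl_type %FORMAT line.
import sys
from collections import defaultdict

THRESHOLD = 10                      # "after maybe 10 tries"
_attempts = defaultdict(int)

def record_attempt(src):
    """Register one attempt from src and return the helper's answer."""
    _attempts[src] += 1
    return "OK" if _attempts[src] >= THRESHOLD else "ERR"

if __name__ == "__main__":
    # Squid writes one request line per lookup and expects OK/ERR back.
    for line in sys.stdin:
        fields = line.split()
        if fields:
            print(record_attempt(fields[0]), flush=True)
```

A real helper would also need to reset the counter on successful auth and
expire old entries; this only shows the OK/ERR counting idea.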


Amos



Re: [squid-users] 3rd email for RPC Over HTTPS issue

2009-06-15 Thread Amos Jeffries
On Mon, 15 Jun 2009 22:44:33 +0400, Mario Remy Almeida
malme...@isaaviation.ae wrote:
 Hi All,
 
 This is my 3rd email for the below mentioned problem.
 I am writing this email in the hope that someone will reply and say if
 it can be done or not. Just yes or no will do for me so that I know it
 is possible or not.
 
 Successfully configure reverse proxy HTTPS but proxy with RPC Over HTTPS
 
 Squid 2.7STABLE6
 Windows 2008
 Exchange 2007
 
 Having issue with RPC over HTTPS, below is the error message
 
 Attempting to ping RPC Endpoint 6001 (Exchange Information Store) on
 server hubsexchange.airarabiauae.com  Failed to ping Endpoint 
 Additional Details   An RPC Error was thrown by the RPC Runtime. Error
 1818 1818
 
 Please let me know what could be the problem, some hint.


Many people are successfully using RPC over Squid.
Configured as per
http://wiki.squid-cache.org/ConfigExamples/Reverse/OutlookWebAccess


Error 1818 appears to be the problem. I cannot help any further, sorry.

This is not a squid issue AFAICT. Look for RPC or Exchange documentation,
or even the MS error reference, to find out what that means.


Amos


Re: [squid-users] Access control : How to block a very large number of domains

2009-06-15 Thread Amos Jeffries
On Mon, 15 Jun 2009 12:26:16 -0700 (PDT), hims92
himanshu.singh.cs...@itbhu.ac.in wrote:
 Hi,
 As far as I know, SquidGuard uses Berkeley DB (which is based on BTree
and
 Hash tables) for storing the urls and domains to be blocked. But I need
to
 store a huge amount of domains (about 7 millions) which are to be
blocked.
 Moreover, the search time to check if the domain is there in the block
 list,
 has to be less than a microsecond.
 
 So, will Berkeley DB serve the purpose?
 
 I can search for a domain using a PATRICIA trie in less than 0.1
 microseconds. So, if Berkeley DB is not fast enough, how can I use a
 PATRICIA trie instead of Berkeley DB in Squid to block the URLs?

To do tests with such critical timing you would be best to use an internal
ACL, which eliminates the network transfer delay to an external process.

Are you fixed to a certain version of Squid?

Squid-2 is not bad to tweak, but not very easy to add new ACLs to either.

The Squid-3 ACLs are fairly easy to implement, and it is easy to drop a new
one in. You can create your own version of dstdomain and have Squid do the
test. At present dstdomain uses an unbalanced splay tree on full
reverse-string matches, which is good but not as good as it could be for
large domain lists.
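As a rough illustration of the reverse-string matching idea (not Squid's
actual splay-tree code), a label-wise trie over reversed domain names might
look like this; all names in the sketch are invented:

```python
# Dict-of-dicts trie keyed on domain labels in reverse order, so
# "example.com" is stored under com -> example. Matching walks from the
# TLD inward, which makes subdomain matches fall out naturally.

END = object()  # marker: a listed domain terminates at this node

def add_domain(trie, domain):
    """Insert one blocked domain into the trie."""
    node = trie
    for label in reversed(domain.lower().split(".")):
        node = node.setdefault(label, {})
    node[END] = True

def is_blocked(trie, host):
    """True if host equals, or is a subdomain of, any listed domain."""
    node = trie
    for label in reversed(host.lower().split(".")):
        if END in node:          # a shorter listed domain already matched
            return True
        node = node.get(label)
        if node is None:         # diverged from every listed domain
            return False
    return END in node           # exact match
```

Lookup cost is proportional to the number of labels in the host, independent
of list size, which is the property the question is after.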

If it scales well and is faster than the existing dstdomain it would be a
welcome addition.

Amos



Re: [squid-users] Access control : How to block a very large number of domains

2009-06-15 Thread Henrik K
On Tue, Jun 16, 2009 at 03:12:07PM +1200, Amos Jeffries wrote:
 
 If it scales well and is faster than the existing dstdomain it would be a
 welcome addition.

If Squid still has the problem of not answering clients during a reload,
that should be fixed too. Fast lookups are one thing, but loading the data
might take tens of seconds.



Re: [squid-users] Bypasing squid for certain sites

2009-06-15 Thread Amos Jeffries
On Mon, 15 Jun 2009 21:44:21 -0400, Jamie Orzechowski
jamie.orzechow...@gmail.com wrote:
 I am having issues with a few sites like megavideo, hotmail, etc and
 looking to bypass them entirely via IPTables ... I have added some
 rules to IPTables but I still see the traffic hitting the caches.  Any
 ideas?
 
 Strange thing is that when running an iptables --list it shows no
 rules configured at all ..

iptables -t mangle --list

;)

 
 Here is my iptables rules
 
 /usr/local/sbin/iptables -t mangle -N DIVERT
 /usr/local/sbin/iptables -t mangle -A DIVERT -j MARK --set-mark 1
 /usr/local/sbin/iptables -t mangle -A DIVERT -j ACCEPT
 /usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m socket -j
DIVERT
 
 #Bypass These subnets
 /usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
 80 -d 65.54.186.0/24 -j RETURN
 /usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
 80 -d 65.54.165.0/24 -j RETURN
 /usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
 80 -d 72.32.79.195/24 -j RETURN
 /usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
 80 -d 64.4.20.0/24 -j RETURN
 /usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
 80 -d 69.5.88.0/24 -j RETURN

Hmm, I'm not sure if RETURN works in a top-level chain.

Perhaps a custom chain containing both the above and below rules would
work?

Amos

 
 # Redirect to squid
 /usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp --dport 80 -j
 TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
 
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
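Amos's custom-chain suggestion might be sketched like this (untested config
fragment; the chain name WEBCACHE is invented for the example):

```shell
# Put the bypass and redirect rules in one user-defined chain, so each
# RETURN exits only that chain and skips the TPROXY rule below it.
iptables -t mangle -N WEBCACHE
iptables -t mangle -A WEBCACHE -d 65.54.186.0/24 -j RETURN
iptables -t mangle -A WEBCACHE -d 65.54.165.0/24 -j RETURN
iptables -t mangle -A WEBCACHE -d 64.4.20.0/24 -j RETURN
iptables -t mangle -A WEBCACHE -d 69.5.88.0/24 -j RETURN
iptables -t mangle -A WEBCACHE -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# Send all port-80 traffic through the custom chain
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j WEBCACHE
```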


Re: [squid-users] Tproxy Help // Transparent works fine

2009-06-15 Thread Amos Jeffries
On Mon, 15 Jun 2009 15:37:06 -0700, Alexandre DeAraujo al...@cal.net
wrote:
 I have a transparent proxy setup currently working and am not seeing any
 problems while browsing. I am trying to set up Squid to show the
 client's IP instead of the proxy server's IP.
 How do I go from this setup to implementing TPROXY? Any pointers will be
 highly appreciated.
 
 CentOS release 5.3 (Final)
 iptables v1.4.3.2
 Squid Cache: Version 3.0.STABLE16
 Linux 2.6.29.4-tproxy2 (custom kernel for tproxy)
 Cisco 7206VXR WCCPv2

Hmm, is that kernel Tproxy v2? or Tproxy v4 labeled as '2'?

It should just be a matter of upgrading Squid to a 3.1 release and
following the instructions at:
 http://wiki.squid-cache.org/Features/Tproxy4

Amos



Re: [squid-users] 3rd email for RPC Over HTTPS issue

2009-06-15 Thread Mario Remy Almeida
Thanks Amos for the reply

I will go through the provided link.

If anyone has a working configuration, could you please send it to me.

//Remy


On Tue, 2009-06-16 at 14:38 +1200, Amos Jeffries wrote:
 On Mon, 15 Jun 2009 22:44:33 +0400, Mario Remy Almeida
 malme...@isaaviation.ae wrote:
  Hi All,
  
  This is my 3rd email for the below mentioned problem.
  I am writing this email in the hope that someone will reply and say if
  it can be done or not. Just yes or no will do for me so that I know it
  is possible or not.
  
  Successfully configure reverse proxy HTTPS but proxy with RPC Over HTTPS
  
  Squid 2.7STABLE6
  Windows 2008
  Exchange 2007
  
  Having issue with RPC over HTTPS, below is the error message
  
  Attempting to ping RPC Endpoint 6001 (Exchange Information Store) on
  server hubsexchange.airarabiauae.com  Failed to ping Endpoint 
  Additional Details   An RPC Error was thrown by the RPC Runtime. Error
  1818 1818
  
  Please let me know what could be the problem, some hint.
 
 
 Many people are successfully using RPC over Squid.
 Configured as per
 http://wiki.squid-cache.org/ConfigExamples/Reverse/OutlookWebAccess
 
 
 Error 1818 appears to be the problem. I cannot help any further, sorry.
 
 This is not a squid issue AFAICT. Look for RPC or Exchange documentation,
 or even the MS error reference, to find out what that means.
 
 
 Amos




[squid-users] squid-manage localink and internasional link

2009-06-15 Thread sonjaya
Hi ...

I am trying to split traffic between a local link (domestic peering) and
an international link. My problem is that I don't have BGP access, so I am
trying to do it manually in Squid.
Here is my squid.conf:

as_whois_server whois.apnic.net

acl whois proto whois
acl IIX dst_as 7597

http_access allow IIX
always_direct allow whois
always_direct allow IIX

cache_peer random.us.ircache.net parent 3128 3130 login=xx...@x:x

When I open an international website it works and is served from the
us.ircache.net cache, but when I try a local website it is also always
fetched from us.ircache.net.

I checked in cachemgr.cgi and there is no response for the AS-number
cache. Do I need additional configuration in squid.conf to make the
local/international split work?

-- 
sonjaya
http://sicute.blogspot.com
http://www.videopingpong.web.id


[squid-users] squid_ldap_auth failure

2009-06-15 Thread Benjamin Fleckenstein

Hi there,

I've tried to set up a connection from a Squid proxy (version 2.6.STABLE10) to 
our AD server (Windows 2003 Server). I've already tried several commands, but 
an error always appears. I have already checked different forums and manuals, 
but I can't get the connection to work.

For testing the connection I've tried the following command:

./squid_ldap_auth -R -b dc=my,dc=domain -D cn=username,dc=my,dc=domain -w 
password -f sAMAccountName=%s -h hostname:389
username password
squid_ldap_auth: WARNING, could not bind to binddn 'Invalid credentials'
ERR Invalid credentials

The user and password are correct. I've installed the ADSnapshot tool to test 
whether the user is able to query the LDAP server. That works!

Does anybody have an idea why I always get that error, and what I could try 
to get this working? Could it be a bug, or is there something wrong with my query?

For any help or ideas I would be thankful!

Lukas
