Re: [squid-users] Problem downloading large files

2012-04-20 Thread Iojan Sebastian

On 4/19/2012 6:22 AM, Leonardo wrote:

Bypassing the Linux bridge where the Squid runs solves the problem, so
apparently the problem lies at the Squid or OS level.

What OS are you running?
I have seen 2 GB limits on Linux i386. Not sure whether that applies here.
Regards
Sebastian



Re: [squid-users] NTLM not working with HTTPS pages

2012-04-20 Thread Amos Jeffries

On 21/04/2012 4:01 a.m., Wladner Klimach wrote:

Amos,

what could be causing this? When I disable NTLM authentication, or when
I use Kerberos, all access goes just fine, but when only NTLM is enabled I
can't get access to https pages and I get TCP_DENIED/407 in the logs.
How can I debug it?


You need to locate and identify what request headers are being denied.

The easiest way with 3.1 is a packet dump with full packet bodies 
("tcpdump -s0 ..."). Then base-64 decode the proxy-auth headers sent 
by the client and check the type codes. NTLM tokens carry the "NTLMSSP" 
signature followed by a binary type number 1, 2 or 3.
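
For example, a hedged capture-and-decode sketch (the interface name, proxy port and token placeholder are illustrative, not from this thread):

  tcpdump -s0 -i eth0 -w ntlm.pcap tcp port 3128
  # copy the base64 token after "Proxy-Authorization: NTLM " out of the dump, then:
  echo '<base64-token-from-capture>' | base64 -d | hexdump -C | head

The decoded bytes should start with the NTLMSSP signature followed by the message type number.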


The NTLM flow should be:

 client: makes request (no auth)
 Squid: emits 407 with NTLM advertised as available
 squid: [optionally closes the connection (due to "auth_param ntlm 
keep-alive off" hack)]

 client: repeat request with type-1 NTLM proxy-auth header
 squid: 407 with type-2 NTLM proxy-auth header
 client: repeat request with type-3 NTLM proxy-auth header
 squid: HTTP response
 client: [optionally make other requests with type-3 NTLM proxy-auth 
header]

 connection closes.


If you find connections opening and starting immediately with a type-3 
token, that is Kerberos or broken NTLM from the client.



Amos



regards

2012/4/20 Amos Jeffries:

On 21/04/2012 1:15 a.m., Harry Mills wrote:

Hi Wladner,

I don't think this is causing your problems, but I think you need to
change the following:

Instead of:

http_access deny CONNECT !Safe_ports

try:

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

Also, on the last two lines of your included config you have:

acl AUTENTICADO proxy_auth REQUIRED
http_access allow AUTENTICADO


This is one of several correct proxy-auth configurations.



I simply have:

http_access allow proxy_auth

I have no idea if this will help, but worth giving it a try perhaps?


?? for that to work you require this somewhere above your http_access rule
...

  acl proxy_auth proxy_auth REQUIRED

or some other definition for an ACL *label* "proxy_auth".
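
As a minimal illustration of that combination (the helper line is just a placeholder for whichever auth scheme is actually configured):

  auth_param basic program ...   # whatever helper is already in use
  acl proxy_auth proxy_auth REQUIRED
  http_access allow proxy_auth
  http_access deny all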

Amos




Re: [squid-users] ICAP service adaptation with service sets

2012-04-20 Thread Amos Jeffries

On 21/04/2012 5:53 a.m., Francis Fauteux wrote:

We are using Squid as an adaptation proxy, with a farm of ICAP RESPMOD servers 
running on a single host. Our (partial) configuration is thus:

icap_enable on

icap_service respmod_service1 respmod_precache 0 
icap://:/RESPMOD
icap_service respmod_service2 respmod_precache 0 
icap://:/RESPMOD

adaptation_service_set respmod_set respmod_service1 respmod_service2

adaptation_access respmod_set allow all

We would like to add an additional service to our proxy, which our current 
RESPMOD server would route our requests to in specific cases. If I understand 
the configuration guide correctly 
(http://www.squid-cache.org/Doc/config/icap_service/), I need to make the 
following changes:

* Modify the RESPMOD server to inject an "X-Next-Services: new_respmod_service" header 
to activate the new service, and inject an "X-Next-Services: " header to deactivate the new 
service.


Um, "activate" is a tricky word here. X-Next-Service tells Squid to use 
the named service(s) on the currently processing request. It does not do 
anything for other requests which "activate" implies.




* Modify the squid configuration thus:

icap_enable on

icap_service respmod_service1 respmod_precache 0 
icap://:/RESPMOD routing=1
icap_service respmod_service2 respmod_precache 0 
icap://:/RESPMOD routing=1

icap_service new_respmod_service1 respmod_precache 0 
icap://:/RESPMOD
icap_service new_respmod_service2 respmod_precache 0 
icap://:/RESPMOD

adaptation_service_set respmod_set respmod_service1 respmod_service2
adaptation_service_set new_respmod_set new_respmod_service1 
new_respmod_service2

adaptation_access respmod_set allow all
adaptation_access new_respmod_set allow all


Can you tell us whether this configuration is correct, and clarify the 
following:

* Does the RESPMOD server need to inject an "X-Next-Services: " header with no 
value to deactivate the new service, or will it be bypassed by default?


The header is per-request. Squid starts off with a plan for doing A then 
B then C filters from the squid.conf settings. X-Next-Services is an 
explicit instruction to erase that plan and replace it with a new set 
starting immediately.
Given that, I believe an empty header means discard the old plan and finish 
adaptation immediately.



* Each service has a farm of server processes for failover in case of error, but it seems the 
"X-Next-Services: new_respmod_server" header will route to a specific 
service, not a service set. Is there a way to route requests to a service set or, if not, to 
provide failover for the new service?


Hmm. I think you just have it send back the service set name 
"X-Next-Services: new_respmod_set". I'm not very familiar with the ICAP 
internal specifics though.
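
So, as a hedged sketch only, the RESPMOD reply headers from your routing service might look something like this (the ISTag and Encapsulated values are made-up placeholders):

  ICAP/1.0 200 OK
  ISTag: "example-1.0"
  X-Next-Services: new_respmod_set
  Encapsulated: res-hdr=0, res-body=137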





* We are using squid version 3.1.14, for which we cannot find the release notes 
(3.1.15 is the earliest version we found). Can you confirm that 3.1.14 supports 
service adaptation ?


http://www.squid-cache.org/Versions/v3/3.1/squid-3.1.14-RELEASENOTES.html
ftp://ftp.squid-cache.org/pub/archive/3.1/
No ICAP related changes from the current latest series release notes though.

Amos


Re: [squid-users] NTLM, non-domain machines and keep-alive

2012-04-20 Thread Harry Mills

Hi,

Firstly, thank you Amos for helping out here. I am finding it rather 
frustrating because I have enough knowledge on this subject to get myself 
into trouble, but not enough to get myself back out of it!


On 20/04/2012 14:58, Amos Jeffries wrote:

On 20/04/2012 12:03 a.m., Harry Mills wrote:

Hi,

I have upgraded our squid to version 3.1.19 but I am still seeing the
repeated popup box issue with non-domain member machines (windows
machines).



Well, yes. Look up the requirements for NTLM with actual security
enabled. #1 on the list is "join the client machine to the domain" or some
wording to that effect.


This can be very frustrating! The problems I am facing are really caused 
by the fact that Windows clients, when presented with "negotiate" as an 
authentication option will choose NTLM when they are not members of the 
domain. This would be fine if they simply popped up a box *once* for the 
credentials, but having to type DOMAIN\username and a password three 
times before you are allowed access is difficult to explain to end users!



NTLM and its relatives are domain-based authentication protocols, with a
centralized controller system. You are trying to make machines outside
the domain with no access to the DC secrets able to generate tokens
based on those secrets.

It used to "work" for NTLMv1 because it has a failure recovery action
which drops back to LM protocol which is frighteningly like Basic auth
protocol without any domain secrets to validate the machine is allowed
to be logged in with. None of the modern software permits that LM mode
to be used anymore without some manual security disabling.


I realise something has changed because our previous ( 4 years older ) 
squid with NTLM worked in exactly the way I would have expected. NTLM 
working for all domain machines, and a *single* popup authentication box 
for those clients which were not domain members - to be honest, I always 
assumed that the single authentication box was the browser falling back 
to Basic auth because it couldn't use NTLM.



Domain member machines authenticate perfectly via NTLM, but non-domain
member machines (Windows XP, Windows 7) pop up a password box three
times before accepting the credentials.

I have removed all the authentication directives _except_ the NTLM one
to simplify the troubleshooting.

If I asked Internet Explorer to save the credentials then the
authentication works fine and I get no further popup boxes. Chrome is
the same - as is Firefox, although interestingly Firefox will only
authenticate if the credentials have been stored. If they have not
been stored (using IE remember password) it plain refuses to
authenticate at all (no popup boxes or anything).


Wow strange behaviour from Firefox, do they have a bug report about this?


I have not come across one, but will check and present one if not.


The others are correct for a non-domain machine. When connected to a
domain the machine can validate that the requested NTLM domain/realm is
the same as the machine login one and use that for single-sign-on.
Without an existing domain login or pre-stored domain credentials to use,
it is only to be expected that the browser asks for a popup to be filled out by
the user.


I realise the popup is necessary as there are no domain credentials to 
use, my confusion was that it pops up three times, my (probably 
confused) logic is that it should only need to ask once!



I am more than happy to work through this myself, but have exhausted
all my ideas. Could some one point me in the right direction?


While keep-alive / persistent connections *are* mandatory for NTLM to
work, the "auth_param ntlm keep-alive off" setting is a kind of special
adaptation to keep-alive: it sends the challenge signalling NTLM then
drops the connection, forcing the client to open a new connection and
start it with the auth handshake requests. Once the handshake is started
the normal persistence settings take over.

It is a bit nasty and somewhat confusing. But that's the best we can do
with certain software.


Thank you for that explanation - it is confusing! All I really want to 
achieve is single-signon for the domain members, and a *single* password 
popup for non-domain members.


Thank you again for your help.

Regards

Harry



Amos





[squid-users] ICAP service adaptation with service sets

2012-04-20 Thread Francis Fauteux
We are using Squid as an adaptation proxy, with a farm of ICAP RESPMOD servers 
running on a single host. Our (partial) configuration is thus:

   icap_enable on

   icap_service respmod_service1 respmod_precache 0 icap://:/RESPMOD
   icap_service respmod_service2 respmod_precache 0 icap://:/RESPMOD

   adaptation_service_set respmod_set respmod_service1 respmod_service2

   adaptation_access respmod_set allow all

We would like to add an additional service to our proxy, which our current 
RESPMOD server would route our requests to in specific cases. If I understand 
the configuration guide correctly 
(http://www.squid-cache.org/Doc/config/icap_service/), I need to make the 
following changes:

* Modify the RESPMOD server to inject an "X-Next-Services: 
new_respmod_service" header to activate the new service, and inject an 
"X-Next-Services: " header to deactivate the new service.

* Modify the squid configuration thus:

   icap_enable on

   icap_service respmod_service1 respmod_precache 0 
icap://:/RESPMOD routing=1
   icap_service respmod_service2 respmod_precache 0 
icap://:/RESPMOD routing=1

   icap_service new_respmod_service1 respmod_precache 0 
icap://:/RESPMOD
   icap_service new_respmod_service2 respmod_precache 0 
icap://:/RESPMOD

   adaptation_service_set respmod_set respmod_service1 respmod_service2
   adaptation_service_set new_respmod_set new_respmod_service1 
new_respmod_service2

   adaptation_access respmod_set allow all
   adaptation_access new_respmod_set allow all


Can you tell us whether this configuration is correct, and clarify the 
following:

* Does the RESPMOD server need to inject an "X-Next-Services: " header with no 
value to deactivate the new service, or will it be bypassed by default?

* Each service has a farm of server processes for failover in case of error, 
but it seems the "X-Next-Services: new_respmod_server" header will route to 
a specific service, not a service set. Is there a way to route requests to a 
service set or, if not, to provide failover for the new service?

* We are using squid version 3.1.14, for which we cannot find the release notes 
(3.1.15 is the earliest version we found). Can you confirm that 3.1.14 supports 
service adaptation ?

Francis Fauteux
Software Engineer 



Re: [squid-users] NTLM not working with HTTPS pages

2012-04-20 Thread Wladner Klimach
Amos,

what could be causing this? When I disable NTLM authentication, or when
I use Kerberos, all access goes just fine, but when only NTLM is enabled I
can't get access to https pages and I get TCP_DENIED/407 in the logs.
How can I debug it?

regards

2012/4/20 Amos Jeffries :
> On 21/04/2012 1:15 a.m., Harry Mills wrote:
>>
>> Hi Wladner,
>>
>> I don't think this is causing your problems, but I think you need to
>> change the following:
>>
>> Instead of:
>>
>> http_access deny CONNECT !Safe_ports
>>
>> try:
>>
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>>
>> Also, on the last two lines of your included config you have:
>>
>> acl AUTENTICADO proxy_auth REQUIRED
>> http_access allow AUTENTICADO
>
>
> This is one of several correct proxy-auth configurations.
>
>
>>
>> I simply have:
>>
>> http_access allow proxy_auth
>>
>> I have no idea if this will help, but worth giving it a try perhaps?
>
>
> ?? for that to work you require this somewhere above your http_access rule
> ...
>
>  acl proxy_auth proxy_auth REQUIRED
>
> or some other definition for an ACL *label* "proxy_auth".
>
> Amos


[squid-users] Squid 3.1: access.log did not log authenticated members

2012-04-20 Thread David Touzeau


Dear all,

I have tested all log formats on my squid 3.1.19 and the user information 
still shows "IP   -   - "

eg:  192.168.1.212 - - [

Is this normal?

I notice that squid 3.2 correctly logs the authenticated user's uid in access.log.

Best regards


Re: [squid-users] squid_ldap_auth to AD user credentials?

2012-04-20 Thread Amos Jeffries

On 19/04/2012 6:59 p.m., Beto Moreno wrote:

  Hi people.

  I had been reading info about squid_ldap_auth vs windows 2003 AD
server, I have some questions that would like to know if someone can
clear my brain.

  squid 2.7.x.


http://www.squid-cache.org/Versions/v2/2.HEAD/manuals/squid_ldap_auth.html



  When a user has special characters in his password, once the browser
opens the credential window it won't accept the user's password and
cache.log says:

squid_ldap_auth: WARNING, could not bind to binddn 'Invalid credentials'

Does anyone know about this odd behaviour?


LDAP uses the word "bind" for the operation that logs in to the 
directory/database before searching it; the binddn is the account used for that login.


Adding the debug (-d) option may explain a bit.
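
One quick way to see that output is to run the helper by hand with a test credential pair; a sketch using the options from your config below (paths and the test user are assumptions):

  /usr/local/libexec/squid/squid_ldap_auth -d -v 3 -b dc=example,dc=local \
      -D squid@example.local -w password -f "sAMAccountName=%s" -P 192.168.50.104:389
  testuser p@ss%word!

The helper replies OK or ERR per input line, and -d prints the LDAP exchange to stderr.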



Second, what is the difference between these two settings:

auth_param basic program /usr/local/libexec/squid/squid_ldap_auth -v 3
-b dc=example,dc=local -D cn=squid,cn=Users,dc=example,dc=local -w
password -f "sAMAccountName=%s" -u uid -P 192.168.50.104:389
auth_param basic program /usr/local/libexec/squid/squid_ldap_auth -v 3
-b dc=example,dc=local -D "squid@example.local" -w password -f
"sAMAccountName=%s" -u uid -P 192.168.50.104:389


The LDAP account used by Squid (the -D option) differs in its representation 
syntax. See the LDAP protocol documentation for what it all means.



Both work.

  Last thing, do we need to use a super-user from AD to bind to the AD
server, or do we just need a normal user?


You just said the "squid@example.local" account worked. Minimal 
privileges are recommended.


Amos


Re: [squid-users] heap LFUDA and squid 3.2.0.16

2012-04-20 Thread Amos Jeffries

On 21/04/2012 1:32 a.m., Kiril Dimitrov wrote:

Thanks a lot.
I was afraid something like that would be the issue; alas, what puzzled
me is that when you change the .conf after squid is already running and
do a -k reconfigure you don't get an error. Perhaps the removal policy
is only checked on initial start-up.


Yes. That is a bug. Thanks for finding it.

Amos



Re: [squid-users] Encrypted (Basic) Authentication

2012-04-20 Thread Amos Jeffries

On 19/04/2012 8:38 p.m., Christoph Mitasch wrote:

Hello,

we have stored usernames and secure password hashes in a central
OpenLDAP directory.

We want to use Squid as a proxy for clients and require them to login
using the central LDAP directory.
This login should work over an encrypted connection since it's not an
option to send the password unencrypted. Logging the username in the
squid logs is also essential.

Using a weak hashing algorithm, like Digest authentication does,
isn't a good option either.

I found the following solution, but I'm not sure if that's a good way
to go.
http://www.mikealeonetti.com/wiki/index.php/Squid_LDAP_transparent_proxy_authentication_script


Not relevant. That is for session-based authorization on intercepted 
traffic. It is not authentication, despite the author's use of the term.

Even the Basic auth protocol with its clear-text credentials is more secure than that.




What can you recommend?


What is the backend you are accessing over LDAP capable of?
Kerberos is the best you can get in the way of secure authentication these 
days, despite the limits it imposes on HTTP performance.



Alternatively you can try using a TLS connection to secure the transport 
between the web clients and Squid.

 http://wiki.squid-cache.org/Features/HTTPS#Encrypted_browser-Squid_connection
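
A minimal sketch of that, assuming you already have a certificate and key for the proxy host (paths and port are placeholders):

  https_port 3129 cert=/etc/squid/proxy.crt key=/etc/squid/proxy.key

Clients then need to be told to use a TLS connection to the proxy, e.g. via a PAC file returning "HTTPS proxy.example.com:3129"; not all browsers support this.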

Amos


Re: [squid-users] NTLM, non-domain machines and keep-alive

2012-04-20 Thread Amos Jeffries

On 20/04/2012 12:03 a.m., Harry Mills wrote:

Hi,

I have upgraded our squid to version 3.1.19 but I am still seeing the 
repeated popup box issue with non-domain member machines (windows 
machines).




Well, yes. Look up the requirements for NTLM with actual security 
enabled. #1 on the list is "join the client machine to the domain" or some 
wording to that effect.


NTLM and its relatives are domain-based authentication protocols, with a 
centralized controller system. You are trying to make machines outside 
the domain with no access to the DC secrets able to generate tokens 
based on those secrets.


It used to "work" for NTLMv1 because it has a failure recovery action 
which drops back to LM protocol which is frighteningly like Basic auth 
protocol without any domain secrets to validate the machine is allowed 
to be logged in with. None of the modern software permits that LM mode 
to be used anymore without some manual security disabling.



Domain member machines authenticate perfectly via NTLM, but non-domain 
member machines (Windows XP, Windows 7) pop up a password box three 
times before accepting the credentials.


I have removed all the authentication directives _except_ the NTLM one 
to simplify the troubleshooting.


If I asked Internet Explorer to save the credentials then the 
authentication works fine and I get no further popup boxes. Chrome is 
the same - as is Firefox, although interestingly Firefox will only 
authenticate if the credentials have been stored. If they have not 
been stored (using IE remember password) it plain refuses to 
authenticate at all (no popup boxes or anything).


Wow strange behaviour from Firefox, do they have a bug report about this?

The others are correct for a non-domain machine. When connected to a 
domain the machine can validate that the requested NTLM domain/realm is 
the same as the machine login one and use that for single-sign-on. 
Without an existing domain login or pre-stored domain credentials to use, 
it is only to be expected that the browser asks for a popup to be filled out by 
the user.




I am more than happy to work through this myself, but have exhausted 
all my ideas. Could some one point me in the right direction?


While keep-alive / persistent connections *are* mandatory for NTLM to 
work, the "auth_param ntlm keep-alive off" setting is a kind of special 
adaptation to keep-alive: it sends the challenge signalling NTLM then 
drops the connection, forcing the client to open a new connection and 
start it with the auth handshake requests. Once the handshake is started 
the normal persistence settings take over.


It is a bit nasty and somewhat confusing. But that's the best we can do 
with certain software.
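
For reference, a hedged squid.conf sketch of that hack, reusing the helper line quoted elsewhere in this thread:

  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 30 startup=5 idle=5
  auth_param ntlm keep_alive off

With keep_alive off Squid drops the connection right after advertising NTLM; with it left on the connection stays open through the whole handshake.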


Amos



Re: [squid-users] heap LFUDA and squid 3.2.0.16

2012-04-20 Thread Kiril Dimitrov
Thanks a lot.
I was afraid something like that would be the issue; alas, what puzzled
me is that when you change the .conf after squid is already running and
do a -k reconfigure you don't get an error. Perhaps the removal policy
is only checked on initial start-up.

I will report again after I recompile with the option you suggested. Thanks again.

On Fri, Apr 20, 2012 at 4:23 PM, Amos Jeffries  wrote:
> On 21/04/2012 12:58 a.m., Kiril Dimitrov wrote:
>>
>> I have the following issue
>>
>> squid version:
>> "kid1| Starting Squid Cache version 3.2.0.16-20120405-r11545 for
>> amd64-unknown-freebsd9.0..."
>> compiled with:
>> "./configure --prefix=/usr/local/squid/
>> --with-swapdir=/usr/local/squid/cache/
>> --with-pidfile=/usr/local/squid/ --with-logdir=/usr/local/squid/logs/
>> --disable-ipv6 --with-default-user=squid --enable-ssl
>> --enable-storeio="ufs aufs" --with-large-files --enable-icap-client"
>>
>> when trying to configure:
>> "cache_replacement_policy heap LFUDA"
>>
>> in /var/log/messages I get :
>> "(squid-1): ERROR: Unknown policy heap"
>>
>> and squid fails to start any ideas would be welcome
>
>
> ./configure --help
>
> ...
>
> "
>  --enable-removal-policies="list of policies"
>                          Build support for the list of removal policies. The
>                          default is only to build the "lru" module. See
>                          src/repl for a list of available modules, or
>                          Programmers Guide section 9.9 for details on how to
>                          build your custom policy
> "
>
> Amos


Re: [squid-users] NTLM not working with HTTPS pages

2012-04-20 Thread Amos Jeffries

On 21/04/2012 1:15 a.m., Harry Mills wrote:

Hi Wladner,

I don't think this is causing your problems, but I think you need to 
change the following:


Instead of:

http_access deny CONNECT !Safe_ports

try:

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

Also, on the last two lines of your included config you have:

acl AUTENTICADO proxy_auth REQUIRED
http_access allow AUTENTICADO


This is one of several correct proxy-auth configurations.



I simply have:

http_access allow proxy_auth

I have no idea if this will help, but worth giving it a try perhaps?


?? for that to work you require this somewhere above your http_access 
rule ...


 acl proxy_auth proxy_auth REQUIRED

or some other definition for an ACL *label* "proxy_auth".

Amos


Re: [squid-users] heap LFUDA and squid 3.2.0.16

2012-04-20 Thread Amos Jeffries

On 21/04/2012 12:58 a.m., Kiril Dimitrov wrote:

I have the following issue

squid version:
"kid1| Starting Squid Cache version 3.2.0.16-20120405-r11545 for
amd64-unknown-freebsd9.0..."
compiled with:
"./configure --prefix=/usr/local/squid/
--with-swapdir=/usr/local/squid/cache/
--with-pidfile=/usr/local/squid/ --with-logdir=/usr/local/squid/logs/
--disable-ipv6 --with-default-user=squid --enable-ssl
--enable-storeio="ufs aufs" --with-large-files --enable-icap-client"

when trying to configure:
"cache_replacement_policy heap LFUDA"

in /var/log/messages I get :
"(squid-1): ERROR: Unknown policy heap"

and squid fails to start. Any ideas would be welcome.


./configure --help

...

"
  --enable-removal-policies="list of policies"
  Build support for the list of removal 
policies. The

  default is only to build the "lru" module. See
  src/repl for a list of available modules, or
  Programmers Guide section 9.9 for details on 
how to

  build your custom policy
"

Amos


Re: [squid-users] Re: DNS & Squid tree with parent - child

2012-04-20 Thread Amos Jeffries

On 21/04/2012 12:47 a.m., anita wrote:

Hi Amos,

I intend to use Squid for a satellite based communication network.
A child squid on one end will talk to the parent squid on the other end.

My understanding was that for every http request that does not have IP but
names instead, the child squid will do a dns lookup if it is a miss in its
cache before sending it to the parent. As the dns lookup will be expensive,
and will cause considerable delay (plus inherent delay due to satellite
networks), I had planned to accumulate some of the DNS look ups from the
parent over time and push it over to the child in the background. This way
the child squid will not have to do a dns lookup but it will be present in
its ipcache itself.

But when I tried it out in a small setup, it looked to me that the child
squid does not seem to do any lookups for the requested URL (it does only
for the PARENT) if the object is not found in its cache. Instead it simply
forwards it to the parent and the parent squid does the look up.
Can you please confirm on this if my understanding is correct? Thanks.




That test result is correct.

The cache is indexed by textual URL and the TCP connection to the parent 
proxy is set up explicitly by cache_peer. If you have cache_peer set up 
with an IP address, or the peer's configured FQDN is in your child proxy's 
/etc/hosts file, there is no DNS lookup needed for HTTP relaying.
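
For example (the address is a placeholder, not your real peer):

  # parent named by IP, so no DNS lookup is needed to open the TCP connection
  cache_peer 192.0.2.10 parent 3128 0 no-query default

or name the peer by FQDN and put that FQDN in the child proxy's /etc/hosts.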


Which leaves only Host: / same-origin validation if you are intercepting 
traffic.
Or dst* ACLs being checked without the relevant domain or IP details 
being in the URL.



NP: pulling the DNS from parent to child will not help much. Squid obeys 
the DNS TTL values and the records need to be in the child before the 
first request makes use of them or DNS lookup will happen anyway.


If you or anyone wants to play around with this... Squid built with 
--disable-internal-dns makes use of a helper query instead of DNS 
packets. A default helper is bundled that uses the system resolver, but 
you can write a custom one with whatever system you like to supply Squid 
with DNS results.


Amos


Re: [squid-users] NTLM not working with HTTPS pages

2012-04-20 Thread Harry Mills

Hi Wladner,

I don't think this is causing your problems, but I think you need to 
change the following:


Instead of:

http_access deny CONNECT !Safe_ports

try:

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

Also, on the last two lines of your included config you have:

acl AUTENTICADO proxy_auth REQUIRED
http_access allow AUTENTICADO

I simply have:

http_access allow proxy_auth

I have no idea if this will help, but worth giving it a try perhaps?

Regards

Harry


On 19/04/2012 19:49, Wladner Klimach wrote:

Hello,

I'm using NTLM scheme like this:


auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30 startup=5 idle=5
auth_param ntlm keep_alive on

And it is working fine except for https pages. Here is my basic squid.conf:


acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localhost src 127.0.0.1/32 ::1
acl manager proto cache_object

acl SSL_ports port 443
acl SSL_ports port 1863
acl SSL_ports port 563
acl SSL_ports port 465
acl SSL_ports port 995
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 563 # https
acl Safe_ports port 465 # https
acl Safe_ports port 995 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl CONNECT method CONNECT

http_access deny CONNECT !Safe_ports
http_access allow manager localhost
http_access deny manager
http_access deny to_localhost

follow_x_forwarded_for allow localhost
acl AUTENTICADO proxy_auth REQUIRED
http_access allow AUTENTICADO

regards,

Wladner




Re: [squid-users] how to use parent cache_peer with url_rewriter working on it

2012-04-20 Thread Amos Jeffries

On 20/04/2012 7:01 p.m., x-man wrote:

Hello there,

I am planning a squid implementation which consists of one main squid that
will serve all the web except the video sites, and a second squid box that
will only deal with the video content.

As I know I have to use the cache_peer directive to tell the main squid that
it has to ask the video squid about a content (it will be based on ACLs).


No. cache_peer tells Squid how to set up TCP connections to a peer. That 
is all.


cache_peer_access (the "will be based on ACLs" part) is what tells Squid *which* 
requests to pass there. The problem you are describing can be the result of not having 
those ACLs present. The child Squid only re-tries alternative paths if 
the parent proxy fails to supply a response for the client (ie link 
outages get routed around).




The problem that I see is that the second squid who is using url_rewriter
and local apache script to cache and deliver the video content will always
reply with cache miss, to the main squid, because for the squid this is not
cached content - as it is maintained by the url_rewriter and apache php
script - then the main squid will deliver the content from the internet.


URL re-writer does not "maintain" any part of HTTP. Its sole purpose is 
to alter the URL for a request before that request gets serviced.


What does Apache have to do with a two-Squid peering setup?




Someone can suggest workaround for this?


Only certain specific types of HTTP "route" failure status cause the 
main Squid to retry like you describe. You need the URL-rewriter NOT to 
cause 4xx/5xx errors.


You can disable the retries by using never_direct with exactly the same 
ACL rules used in cache_peer_access to select the parent cache. What 
that will do is cause the 4xx/5xx errors caused by your re-writer to be 
passed back to the client instead of the real video being found and fetched.
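
A hedged sketch of that combination (the domain list and peer address are placeholders):

  acl video dstdomain .example-video.com
  cache_peer 192.0.2.20 parent 3128 0 no-query name=videoproxy
  cache_peer_access videoproxy allow video
  cache_peer_access videoproxy deny all
  never_direct allow video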


Amos



[squid-users] heap LFUDA and squid 3.2.0.16

2012-04-20 Thread Kiril Dimitrov
I have the following issue

squid version:
"kid1| Starting Squid Cache version 3.2.0.16-20120405-r11545 for
amd64-unknown-freebsd9.0..."
compiled with:
"./configure --prefix=/usr/local/squid/
--with-swapdir=/usr/local/squid/cache/
--with-pidfile=/usr/local/squid/ --with-logdir=/usr/local/squid/logs/
--disable-ipv6 --with-default-user=squid --enable-ssl
--enable-storeio="ufs aufs" --with-large-files --enable-icap-client"

when trying to configure:
"cache_replacement_policy heap LFUDA"

in /var/log/messages I get :
"(squid-1): ERROR: Unknown policy heap"

and squid fails to start. Any ideas would be welcome.


[squid-users] Re: DNS & Squid tree with parent - child

2012-04-20 Thread anita
Hi Amos,

I intend to use Squid for a satellite based communication network.
A child squid on one end will talk to the parent squid on the other end.

My understanding was that for every http request that does not have IP but
names instead, the child squid will do a dns lookup if it is a miss in its
cache before sending it to the parent. As the dns lookup will be expensive,
and will cause considerable delay (plus inherent delay due to satellite
networks), I had planned to accumulate some of the DNS look ups from the
parent over time and push it over to the child in the background. This way
the child squid will not have to do a dns lookup but it will be present in
its ipcache itself.

But when I tried it out in a small setup, it looked to me that the child
squid does not seem to do any lookups for the requested URL (it does only
for the PARENT) if the object is not found in its cache. Instead it simply
forwards it to the parent and the parent squid does the look up. 
Can you please confirm on this if my understanding is correct? Thanks.



Re: [squid-users] Squid load balancing n cluster

2012-04-20 Thread Amos Jeffries

On 21/04/2012 12:16 a.m., Ibrahim Lubis wrote:

I need some info about load balancing and clustering in squid. Do I need to use some 
cluster software, or just use ICAP?


HTTP multiplexing / load balancing is built into Squid. 
http://wiki.squid-cache.org/Features/LoadBalance


You can also use any TCP/IP load balancing device to pass connections to 
individual Squid instances.
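
For example, a front-end Squid can spread traffic over several parents (a minimal sketch; the addresses are placeholders):

  cache_peer 192.0.2.11 parent 3128 0 no-query round-robin
  cache_peer 192.0.2.12 parent 3128 0 no-query round-robin
  never_direct allow all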


Amos



Re: [squid-users] DNS & Squid tree with parent - child

2012-04-20 Thread Amos Jeffries

On 20/04/2012 9:59 p.m., anita wrote:

Hi All,

I am using squid 3.1.16 version.
I am looking into extending the DNS feature to suit my application.


Please explain: what type of app do you have that requires anything 
outside of regular HTTP handling?


HTTP URLs contain a hostname or IP. DNS is needed sometimes to convert 
these to IPs.
TCP connections used by HTTP are made with IPs. DNS is sometimes needed 
to locate the PTR FQDN for logging or access control purposes.


Squid already does these DNS lookups *if* needed.



I have a query here regarding the basic DNS feature in Squid.

I am using internal dns client & localdomain as my dns server.

My Setup:
1 Child squid (sitting in machine 1) ->  1 Parent Squid (sitting in machine
2) ->  Apache Server (sitting in machine 2)

There are no siblings.
I am running a browser client, wget in this case, to fetch a URL that is not
present in both child & parent cache.

In this case:
1.  Does the Child Squid do any dns lookup of the link requested by wget
before sending it to the parent squid when it is a MISS? Or is it done only
by the parent when the child declares it as a miss?


Each Squid does the DNS lookups it needs to perform your configured 
handling.


The child Squid MAY perform DNS lookups to locate the parent Squid.
The parent Squid MAY do DNS lookups to identify the URL host to fetch from.

Amos


[squid-users] Re: Re: Re: Re: squid_kerb_auth High CPU load.

2012-04-20 Thread Markus Moeller
Can you also send me the extract from cache.log for the same period ? Do you 
use the -d debug flag with squid_kerb_auth ?

Markus

"Markus Moeller"  wrote in message 
news:jmrkhi$42v$1...@dough.gmane.org...

Hi Simon,

 The config is standard and looks OK.  Can you run strace (strace -f -F -o 
/tmp/squid_kerb_auth.strace -p ) for 1-2 min against the process when 
it is busy and send me the output ?


Markus

"Simon Dwyer"  wrote in message 
news:1334876889.2408.45.ca...@sdwyer.federalit.net...

Not sure how to give you the figures of req/sec but this morning when i
flicked it over there would have been max 15 people using it for normal
browsing.

Following is my krb5.conf in case I am missing something or doing
something wrong.

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = MULAWA.INTERNAL
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
default_tkt_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5
default_tgs_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5

[realms]

MULAWA.INTERNAL = {
 kdc = dc-hbt-01.mulawa.internal
 kdc = dc-hbt-02.mualwa.internal
}

[domain_realm]
mulawa.internal = MULAWA.internal
.mulawa.internal = MULAWA.internal




On Thu, 2012-04-19 at 23:36 +0100, Markus Moeller wrote:

How many requests/sec does your squid serve? I would not expect it to be
that much higher than with NTLM.

Markus

"Simon Dwyer"  wrote in message
news:1334870417.2408.38.ca...@sdwyer.federalit.net...
> Moved my production over to kerberos this morning with the correct
> export for kerberos and this is whats happening
>
> 20711 squid 20   0 32212 3748 1732 R 34.3  0.1   0:04.42
> squid_kerb_auth
> 20716 squid 20   0 32200 3748 1732 R 34.3  0.1   0:08.41
> squid_kerb_auth
> 20712 squid 20   0 30544 2196 1732 S 20.6  0.1   0:28.23
> squid_kerb_auth
>
> They are just the top 3 processes.
>
> When i am not using kerberos authentication my cpu is hardly touched.
>
> Any insight would be awesome.
>
> Simon
>
> On Thu, 2012-04-19 at 16:03 +1000, Simon Dwyer wrote:
>> Hi Markus,
>>
>> I have actually got this now setup on a second machine.
>>
>> When i put in the export the HTTP_23 does not appear anymore which i 
>> am

>> expecting.
>>
>> I will double check this in production tomorrow morning and see how i
>> go.
>>
>> Simon
>>
>> On Thu, 2012-04-19 at 15:49 +1000, Simon Dwyer wrote:
>> > Hi Markus,
>> >
>> > I do have a
>> >
>> > -rw---. 1 squid squid92907 Apr 19 08:21 HTTP_23
>> >
>> > which may have been the last time i tried to run it this morning.
>> >
>> > I wont be able to try it again till tomorrow morning to see if it
>> > modifies it
>> >
>> > Cheers,
>> >
>> > Simon
>> >
>> > On Thu, 2012-04-19 at 06:44 +0100, Markus Moeller wrote:
>> > > Hi Simon,
>> > >
>> > >   Unfortunately I do not have a production environment to give 
>> > > you

>> > > average
>> > > usage numbers.
>> > >
>> > >   Can you check that you don't have a file in /var/tmp like (or 
>> > > at

>> > > least is
>> > > not modified):
>> > >
>> > > -rw--- 1 squid nogroup 603 Apr  7 01:13
>> > > /var/tmp/opensuse12--HTTP-044_31
>> > >
>> > >   This is the replay cache if not disabled.
>> > >
>> > > Markus
>> > >
>> > > "Simon Dwyer"  wrote in message
>> > > news:1334813176.2408.29.ca...@sdwyer.federalit.net...
>> > > > Hi Markus,
>> > > >
>> > > > This is in the /etc/init.d/squid
>> > > >
>> > > > if [ -f /etc/sysconfig/squid ]; then
>> > > >. /etc/sysconfig/squid
>> > > > fi
>> > > >
>> > > > What should the cpu usage be of each squid_kerb_auth process 
>> > > > when

>> > > > used?
>> > > >
>> > > > Cheers,
>> > > >
>> > > > Simon
>> > > >
>> > > > On Thu, 2012-04-19 at 06:15 +0100, Markus Moeller wrote:
>> > > >> Are you sure /etc/sysconfig/squid is sourced by the squid 
>> > > >> startup

>> > > >> script
>> > > >> ?
>> > > >> Markus
>> > > >>
>> > > >> "Simon Dwyer"  wrote in message
>> > > >> news:1334789097.2408.17.ca...@sdwyer.federalit.net...
>> > > >> > Hi all,
>> > > >> >
>> > > >> > I have got kerberos working and moved it to production but 
>> > > >> > then

>> > > >> > the
>> > > >> > server started smashing its cpu.  It seems that the
>> > > >> > squid_kerb_auth
>> > > >> > processes are killing the cpu.
>> > > >> >
>> > > >> > I have the following in my config.
>> > > >> >
>> > > >> > /etc/sysconfig/squid/
>> > > >> >
>> > > >> > KRB5RCACHETYPE=none
>> > > >> > export KRB5RCACHETYPE
>> > > >> >
>> > > >> > /etc/squid/squid.conf
>> > > >> >
>> > > >> > auth_param negotiate program  /usr/bin/negotiate_wrapper
>> > > >> > --kerberos /usr/lib64/squid/squid_kerb_auth -i -r -s
>> > > >> > GSS_C_NO_NAME
>> > > >> > --ntlm 
>> > > >> > /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

>> > > >> > --domain=DOMAIN.EXAMPLE
>> > > >> > auth_param negotiate children 30
>> > > >> > auth_param negotiate keep_alive on
>> > > >> >
>> > > >

[squid-users] Re: Re: Re: Re: squid_kerb_auth High CPU load.

2012-04-20 Thread Markus Moeller

Hi Simon,

 The config is standard and looks OK.  Can you run strace (strace -f -F -o 
/tmp/squid_kerb_auth.strace -p ) for 1-2 min against the process when 
it is busy and send me the output ?


Markus

"Simon Dwyer"  wrote in message 
news:1334876889.2408.45.ca...@sdwyer.federalit.net...

Not sure how to give you the figures of req/sec but this morning when i
flicked it over there would have been max 15 people using it for normal
browsing.

Following is my krb5.conf in case I am missing something or doing
something wrong.

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = MULAWA.INTERNAL
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
default_tkt_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5
default_tgs_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5

[realms]

MULAWA.INTERNAL = {
 kdc = dc-hbt-01.mulawa.internal
 kdc = dc-hbt-02.mualwa.internal
}

[domain_realm]
mulawa.internal = MULAWA.internal
.mulawa.internal = MULAWA.internal




On Thu, 2012-04-19 at 23:36 +0100, Markus Moeller wrote:

How many requests/sec does your squid serve? I would not expect it to be
that much higher than with NTLM.

Markus

"Simon Dwyer"  wrote in message
news:1334870417.2408.38.ca...@sdwyer.federalit.net...
> Moved my production over to kerberos this morning with the correct
> export for kerberos and this is whats happening
>
> 20711 squid 20   0 32212 3748 1732 R 34.3  0.1   0:04.42
> squid_kerb_auth
> 20716 squid 20   0 32200 3748 1732 R 34.3  0.1   0:08.41
> squid_kerb_auth
> 20712 squid 20   0 30544 2196 1732 S 20.6  0.1   0:28.23
> squid_kerb_auth
>
> They are just the top 3 processes.
>
> When i am not using kerberos authentication my cpu is hardly touched.
>
> Any insight would be awesome.
>
> Simon
>
> On Thu, 2012-04-19 at 16:03 +1000, Simon Dwyer wrote:
>> Hi Markus,
>>
>> I have actually got this now setup on a second machine.
>>
>> When i put in the export the HTTP_23 does not appear anymore which i 
>> am

>> expecting.
>>
>> I will double check this in production tomorrow morning and see how i
>> go.
>>
>> Simon
>>
>> On Thu, 2012-04-19 at 15:49 +1000, Simon Dwyer wrote:
>> > Hi Markus,
>> >
>> > I do have a
>> >
>> > -rw---. 1 squid squid92907 Apr 19 08:21 HTTP_23
>> >
>> > which may have been the last time i tried to run it this morning.
>> >
>> > I wont be able to try it again till tomorrow morning to see if it
>> > modifies it
>> >
>> > Cheers,
>> >
>> > Simon
>> >
>> > On Thu, 2012-04-19 at 06:44 +0100, Markus Moeller wrote:
>> > > Hi Simon,
>> > >
>> > >   Unfortunately I do not have a production environment to give you
>> > > average
>> > > usage numbers.
>> > >
>> > >   Can you check that you don't have a file in /var/tmp like (or at
>> > > least is
>> > > not modified):
>> > >
>> > > -rw--- 1 squid nogroup 603 Apr  7 01:13
>> > > /var/tmp/opensuse12--HTTP-044_31
>> > >
>> > >   This is the replay cache if not disabled.
>> > >
>> > > Markus
>> > >
>> > > "Simon Dwyer"  wrote in message
>> > > news:1334813176.2408.29.ca...@sdwyer.federalit.net...
>> > > > Hi Markus,
>> > > >
>> > > > This is in the /etc/init.d/squid
>> > > >
>> > > > if [ -f /etc/sysconfig/squid ]; then
>> > > >. /etc/sysconfig/squid
>> > > > fi
>> > > >
>> > > > What should the cpu usage be of each squid_kerb_auth process 
>> > > > when

>> > > > used?
>> > > >
>> > > > Cheers,
>> > > >
>> > > > Simon
>> > > >
>> > > > On Thu, 2012-04-19 at 06:15 +0100, Markus Moeller wrote:
>> > > >> Are you sure /etc/sysconfig/squid is sourced by the squid 
>> > > >> startup

>> > > >> script
>> > > >> ?
>> > > >> Markus
>> > > >>
>> > > >> "Simon Dwyer"  wrote in message
>> > > >> news:1334789097.2408.17.ca...@sdwyer.federalit.net...
>> > > >> > Hi all,
>> > > >> >
>> > > >> > I have got kerberos working and moved it to production but 
>> > > >> > then

>> > > >> > the
>> > > >> > server started smashing its cpu.  It seems that the
>> > > >> > squid_kerb_auth
>> > > >> > processes are killing the cpu.
>> > > >> >
>> > > >> > I have the following in my config.
>> > > >> >
>> > > >> > /etc/sysconfig/squid/
>> > > >> >
>> > > >> > KRB5RCACHETYPE=none
>> > > >> > export KRB5RCACHETYPE
>> > > >> >
>> > > >> > /etc/squid/squid.conf
>> > > >> >
>> > > >> > auth_param negotiate program  /usr/bin/negotiate_wrapper
>> > > >> > --kerberos /usr/lib64/squid/squid_kerb_auth -i -r -s
>> > > >> > GSS_C_NO_NAME
>> > > >> > --ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
>> > > >> > --domain=DOMAIN.EXAMPLE
>> > > >> > auth_param negotiate children 30
>> > > >> > auth_param negotiate keep_alive on
>> > > >> >
>> > > >> > From what i have read the first part should fix the high cpu
>> > > >> > issue but
>> > > >> > it doesnt seem to help.
>> > > >> >
>> > > >> > More the case i am having trouble getting that variable 
>> > > >> > active.

>> > > >> >
>> 

[squid-users] Squid load balancing n cluster

2012-04-20 Thread Ibrahim Lubis
I need some info about load balancing and clustering in squid. Do I need to use some 
cluster software, or just use ICAP?

Re: [squid-users] ESI support in SQUID

2012-04-20 Thread Amos Jeffries

On 19/04/2012 9:06 p.m., Dirk Högemann wrote:

Hi,

I am trying to run SQUID with ESI support (parser is custom).


Squid version?



This works fine until processed pages reference for example Javascript files
with filesize bigger than 64K.
In that case SQUID crashes.
LOG:

2012/04/19 10:47:19.295| clientStreamCallback: Calling 1 with cbdata
0x85ecfac from node 0x85dbde0
2012/04/19 10:47:19.295| esiProcessStream: Processing thisNode 0x86345d0
context 0x85ecf20 offset 65126 length 4096
2012/04/19 10:47:19.295| assertion failed: String.cc:197: "len_ + len<
65536"



Are there any workaround for this issue?


No. Squid has a hard-coded string length limit of 64KB. ESI needs a 
re-write to work without the string buffer, possibly to stream the 
constructed reply straight to the client or chunk it into 64KB pieces.



Or is it possible to configure SQUID in a way  to skip ESI processing for
example files/content by url path?


ESI processing is determined by the origin web server. The Surrogate-* 
header targets a particular surrogate proxy to initiate ESI processing 
(or not) per-request.
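
As an illustration only: the origin opts a response into ESI handling with a header along these lines, so one way to skip processing for particular URLs is simply not to emit it for them:

  Surrogate-Control: content="ESI/1.0"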



PS.  "SQUID" is an electronic component like a diode. "Squid" is the proxy.

Amos


Re: [squid-users] Problem downloading large files

2012-04-20 Thread Amos Jeffries

On 19/04/2012 9:22 p.m., Leonardo wrote:

Hi all,

We noticed that users behind our Squid cannot download files larger
than 2 Gb: the connection is cut around that limit.

reply_body_max_size is not set in squid.conf so download file size
should be unlimited.

I've done a tcpdump capture and examined it on Wireshark; I see a FIN
from the remote server after that 2 Gb have been transferred.


You would seem not to have large file support (LFS, aka 64-bit 
filesystem) built into Squid or possibly the box its running on.



Bypassing the Linux bridge where the Squid runs solves the problem, so
apparently the problem lies at the Squid or OS level.

Squid is version 3.1.7 with configure options:
'--enable-linux-netfilter' '--enable-wccp' '--prefix=/usr'
'--localstatedir=/var' '--libexecdir=/lib/squid' '--srcdir=.'
'--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience


Was this built on a 32-bit or 64-bit system? 32-bit builds require 
"--with-large-files"


Also, can you update your Squid to the currently supported release?

Amos


Re: [squid-users] current status of bump-server-first + dynamic certs in 3.3??

2012-04-20 Thread Amos Jeffries

On 19/04/2012 9:54 p.m., Ahmed Talha Khan wrote:

Hey all,

I want to use dynamic certificates (and/or mimic original ssl server
certs) while running in a transparent mode. I know this is not
possible in 3.2 because of the bump-client-first approach. Release
roadmap for squid 3 says that bump-server-first is(will be) available
in 3.3 which is under dev right now. Mimicking original ssl server
cert is also available in 3.3.

I want to know about the current status of these 2 features in 3.3.
How far along are they in the testing and how much stable is it. Are
the 2 features working correctly or to some extent? Can i start using
them right now and get more confidence when the release matures. Any
anticipated dates for stable 3.3?


squid-dev is the place to ask that, or the developers listed in the 
wiki as responsible for it. I only know about what has been committed, 
and sometimes not even that very well. The other devs do not follow this 
stable release user help list regularly.


Amos


[squid-users] Re: Correctoions (was TCP_SWAPFAIL/200)

2012-04-20 Thread Amos Jeffries

On 20/04/2012 8:30 a.m., Linda Walsh wrote:

Amos Jeffries wrote:


On 18.04.2012 12:46, Linda Walsh wrote:



http_access allow CONNECT Safe_Ports


NOTE: Dangerous. Safe_Ports includes port 1024-65535 and other ports 
unsafe to permit CONNECT to. This could trivially be used as a 
multi-stage spam proxy or worse.
  ie a trivial DoS of "CONNECT localhost:8080 HTTP/1.1\n\n" results 
in a CONNECT loop until your machine's ports are all used up.



Good point. Just wanted to allow the general case of SSL/non-SSL over 
any of the
ports.  Just trying to get things working at this point... though I have 
had this config for some time and no problems -- the only connector is on my 
side and 'me', so

I shouldn't deny myself my own service unless I try!  ;-)



That's part of the point. There is nothing restricting this allow to just 
you. It allows CONNECT to anywhere with any of those ports. Better to 
just omit the normal "deny CONNECT SSL_ports" and leave the allow rule 
being the "allow localnet" one. That way you can do anything, but others 
can't abuse the proxy.



cache_mem   8 GB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /var/cache/squid 65535 64 64


You have multiple workers configured. AUFS does not support SMP at 
this time. That could be the problem you have with SWAPFAIL, as the 
workers collide altering the cache contents.


---
Wah?   .. but but... how do I make use of SMP with AUFS?

If I go with unique cache dirs that's very sub-optimal -- since I end up
with 12 separate cache areas, no?  When I want to fetch something from
the cache, is there coordination about what content is in which 
worker's cache
that will automatically invoke the correct worker?   -- If so, that's 
cool,

but if not, then I'll reduce my hit rate by 1/N-cpus.



There is shared memory doing things I have not quite got my own head 
around yet. I think it's just shared cache_mem and rock storage which are 
cross-worker coordinated. The others AFAIK still need traditional 
multi-process coordination like HTCP/ICP/CARP between worker processes.








To use this cache either wrap it in "if ${process_number} = N" tests 
for the workers you want to do caching. Or add ${process_number} to 
the path for each worker to get its own unique directory area.


eg:
 cache_dir aufs /var/cache/squid_${process_number} 65535 64 64

or
if ${process_number} = 1
 cache_dir aufs /var/cache/squid 65535 64 64
endif





--- As said above, how do I get multi-benefit with asynchronous writes
and multi core?


At present only "rock" type cache_dir (for small <32K objects) and 
cache_mem support SMP. To get 3.2 released stable this year we had to 
cut full SMP support across the board short :-(. It is coming one 
day; with sponsorship that day can come faster, but it's not today.
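
For what it's worth, a hedged sketch of an SMP-friendly layout today (the sizes are arbitrary examples): a shared rock store for small objects plus a per-worker aufs store, as in

  workers 12
  cache_dir rock /var/cache/squid-rock 16384 max-size=32768
  cache_dir aufs /var/cache/squid_${process_number} 65535 64 64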




url_rewrite_host_header off
url_rewrite_access deny all
url_rewrite_bypass on


You do not have any re-writer or redirector configured. These 
url_rewrite_* can all go.


-
Is it harmful (it was for future 'expansion plans' -- no
rewriters yet, but was planning...)


No. Just a speed drag at present.







refresh_pattern -i (/cgi-bin/|\?) 0 0%  0


This above pattern ...




 above what pattern?



"refresh_pattern -i (/cgi-bin/|\?) 0 0%  0"





refresh_pattern -i \.(ico|gif|jpg|png)   0 20%   4320
ignore-no-cache ignore-private override-expire
refresh_pattern -i ^http:   0 20%   4320   ignore-no-cache 
ignore-private


"private" means the contents MUST NOT be served to multiple clients. 
Since you say this is a personal proxy just for you, thats okay but 
be carefulif you ever open it for use by other people. Things like 
your personal details embeded in same pages are cached by this.



Got it... I should add a comment in that area to that effect


That might be an enhancement -- like
ignore-private-same-client




"no-cache" *actually* just means check for updates before using the 
cached version. This is usually not as useful as many tutorials make 
it out to be.


---
Well, dang tutorials -- I'm screwed if I follow, and if I don't! ;-)



Sad, eh?








refresh_pattern ^ftp:   1440   20% 10080
refresh_pattern ^gopher:1440   0%  1440


 ... is meant to be here (second to last).


refresh_pattern .   0   20% 4320
read_ahead_gap 256 MB


Uhm... 256 MB buffering per request sure you want to do that?



I *think* so... doesn't that mean it will buffer up to 256MB
of a request before my client is ready for it?


Yes, exactly so. In RAM, which is the risky part. If Squid process 
starts swapping your service speed goes down the drain very fast.




I think of the common case where I am saving a file and it takes me
a while to find the dir to save to.  I tweaked a few params in this area,
and it went from having to wait after I decided, to by the time I 
decided, it

was already downloaded.

Would this be 

[squid-users] DNS & Squid tree with parent - child

2012-04-20 Thread anita
Hi All,

I am using squid 3.1.16 version.
I am looking into extending the DNS feature to suit my application.
I have a query here regarding the basic DNS feature in Squid.

I am using internal dns client & localdomain as my dns server.

My Setup:
1 Child squid (sitting in machine 1) -> 1 Parent Squid (sitting in machine
2) -> Apache Server (sitting in machine 2)

There are no siblings.
I am running a browser client, wget in this case, to fetch a URL that is not
present in both child & parent cache.

In this case:
1.  Does the Child Squid do any dns lookup of the link requested by wget
before sending it to the parent squid when it is a MISS? Or is it done only
by the parent when the child declares it as a miss?

I have narrowed it down to look at ipcache.cc & dns_internal.cc for these
functionalities. Please let me know if I need to look into any other files.

Any help will be greatly appreciated.
Thanks in advance !

Regards,
Anita



[squid-users] how to use parent cache_peer with url_rewriter working on it

2012-04-20 Thread x-man
Hello there,

I am planning a squid implementation which consists of one main squid that
will serve all the web except the video sites, and a second squid box that
will only deal with the video content. 

As I know I have to use the cache_peer directive to tell the main squid that
it has to ask the video squid about a content (it will be based on ACLs). 

The problem that I see is that the second squid who is using url_rewriter
and local apache script to cache and deliver the video content will always
reply with cache miss, to the main squid, because for the squid this is not
cached content - as it is maintained by the url_rewriter and apache php
script - then the main squid will deliver the content from the internet.

Someone can suggest workaround for this?
