Re: [squid-users] Squid + TOS or DSCP based on domain to client

2011-06-10 Thread Amos Jeffries

On 09/06/11 21:42, christ...@wanxp.com wrote:

Dear,

as I know, we can set TOS or DSCP in Squid based on HIT or MISS using the ZPH
feature, but is there any way to set TOS/DSCP based on an acl domain? In
fact I want to set TOS 0x2 for the domain www.facebook.com on the return to
the client, so I can give better QoS based on TOS/DSCP.
I tried tcp_outgoing_tos but had no luck. It seems to work only for
traffic outgoing to the Internet, not for data returned to the client. Any suggestions?


The clientside_tos directive is available in squid-3.x for this.
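A minimal sketch of that directive, assuming the poster's domain and TOS value (check squid.conf.documented for your version's exact syntax):

```
# mark responses delivered to clients for facebook traffic with TOS 0x2
acl facebook dstdomain .facebook.com
clientside_tos 0x2 facebook
```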

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] FATAL: Too many queued redirector requests

2011-06-10 Thread Amos Jeffries

On 09/06/11 20:16, Le Trung Kien wrote:

Hi, I don't know how to manage memory for Squid. Squid automatically
shuts itself down and restarts:


http://wiki.squid-cache.org/SquidFaq/SquidMemory



In my squid.conf I set:

url_rewrite_children 120 startup=30 idle=100 concurrency=0


You need 3.2 for the dynamic startup capabilities. 3.1 only uses the 
first value (120 helpers to start *immediately*).
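A sketch of the contrast, using the poster's numbers:

```
# squid-3.1: only the first value is honoured; all 120 helpers fork at startup
url_rewrite_children 120
# squid-3.2: dynamic startup; 30 helpers fork at startup, growing on demand
url_rewrite_children 120 startup=30 idle=100
```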


Given that 120 is actually a reasonable number of helpers, I think memory 
consumption elsewhere is probably killing them.





max_filedesc 7168
cache_swap_low 60
cache_swap_high 80


If you are forced to use swap values like that, it means your cache 
management is badly calculated.


There is no point allocating, for example, 100GB of disk storage when 
cache_swap_low forces 40% of it to go unused. And Squid pausing 
occasionally while 20% of it (80% - 60%) is erased can also cause visible 
slowdown for clients.


The memory FAQ page linked above outlines how to estimate cache sizes.



I notice that just one pending request is queued:

2011/06/09 11:00:17| WARNING: All redirector processes are busy.
2011/06/09 11:00:17| WARNING: 1 pending requests queued

and follow is more details of cache.log:

2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory




2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:16| ipcCreate: fork: (12) Cannot allocate memory
2011/06/09 11:00:16| WARNING: Cannot run
'/opt/squid-3.1.10/urlrewriter_new.pl' process.
2011/06/09 11:00:17| WARNING: All redirector processes are busy.
2011/06/09 11:00:17| WARNING: 1 pending requests queued
2011/06/09 11:00:17| storeDirWriteCleanLogs: Starting...
2011/06/09 11:00:17| WARNING: Closing open FD  177
2011/06/09 11:00:17|   Finished.  Wrote 16367 entries.
2011/06/09 11:00:17|   Took 0.01 seconds (1308941.14 entries/sec).
FATAL: Too many queued redirector requests
Squid Cache (Version 3.1.10): Terminated abnormally.


Your machine seems not to be capable of running 120 of these rewriter 
processes. You need more RAM available to them, or a lower maximum number 
of helpers.




--  CRASH ***
CPU Usage: 6631.678 seconds = 4839.266 user + 1792.412 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 3299


Page faults - your system is swapping badly to get that many just on 
startup. If this continues it will kill Squid performance later even if 
it gets past the helpers.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] How to disable Regular Proxy Access under Interception Mode?

2011-06-10 Thread Amos Jeffries

On 09/06/11 23:11, kkk kkk wrote:

Hi everyone,

I'm running Squid 3.1 in Interception mode that is set to intercept
traffic to a list of 10 websites.
One security concern I have is that anyone in my ACL can enter my
proxy IP and port in their browser and use it as a regular proxy.

Is there a way to disable this access? If I can disable this access,


You fail to say which NAT infrastructure is being used to intercept.

The Linux intercept examples have been updated to include rules in the 
mangle netfilter table which provide this protection.

  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect

If you are using some other form of NAT, nobody has (yet) provided any 
extra details about solving this problem.
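For context, the protection added on those wiki pages amounts to something like the sketch below (3128 as the intercept port is an assumption; the wiki pages carry the authoritative rules). It works because NAT rewrites the destination port after the mangle PREROUTING chain runs, so only connections aimed directly at the proxy port still carry it at this point:

```
# drop packets addressed directly to the intercept port (i.e. not NAT'ed there)
iptables -t mangle -A PREROUTING -p tcp --dport 3128 -j DROP
```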



no one can abuse my service because I can control what dstDomains will
use my proxy.


Your configuration shows that this claim is probably false. See below.



This is my current setup:

acl allowed_IP src IP
http_access allow allowed_IP


Anyone in the allowed_IP list can do anything they like regardless of 
domain.



http_access deny all


Only the domains I want to allow access:
acl allowed_domains dstdomain


If it's theoretically possible, how can I write an ACL combo that
only allows allowed_ip to access allowed_domains, instead of it
being able to access everything once it's allowed?


Access controls in Squid form a complete boolean logic language. Anything 
that can be described in if-else form can be configured.

  http://wiki.squid-cache.org/SquidFaq/SquidAcl#Common_Mistakes
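One sketch of such a combo, reusing the poster's acl names (the src and dstdomain values are placeholders). Multiple acls on one http_access line are ANDed:

```
acl allowed_IP src 192.0.2.0/24
acl allowed_domains dstdomain .example.com
# allow only when BOTH acls match
http_access allow allowed_IP allowed_domains
http_access deny all
```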

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Tom Tux
From squid.conf.documented:
--- SNIP ---
#  TAG: cache_mgr
#   Email-address of local cache manager who will receive
#   mail if the cache dies.  The default is webmaster.
#Default:
# cache_mgr webmaster
--- SNAP ---

As I understand it, squid will send a mail to the defined mail recipient
if squid recognizes that something is seriously wrong and it is about to
die. Hence my question: how can I force squid to die (not kill -9 or
kill -15, because those kill squid immediately or shut it down
normally, respectively) to check that this mail gets sent?

Many thanks.
Tom

2011/6/10 Helmut Hullen hul...@t-online.de:
 Hello, David,

 You wrote on 09.06.11:

 Brute force (under a linux distribution):

        killall squid

 (if you are curious: repeat the command)

 actually, you do need to repeat the command. Squid doesn't stop
 immediately when it gets the standard kill signal (15); it does some
 sort of graceful shutdown that can take a significant amount of time
 to actually take place.

 [...]

 what I do is to do killall squid in a loop until the killall command
 returns an error that there is no process to kill.

 Ok - I've seen this behaviour too but haven't yet studied it deeply.
 Thank you!

 Best regards!
 Helmut



Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Amos Jeffries

On 10/06/11 08:32, da...@lang.hm wrote:

On Thu, 9 Jun 2011, Helmut Hullen wrote:


Hello, Tom,

You wrote on 09.06.11:



I need to verify, if I get a mail when squid dies (I put a correct
mailaddress in cache_mgr-directive). Is there a way/hack to force
squid to die?


Yes - this daemon is started from a script, e.g. with the command

/etc/init.d/squid start

(the command depends on your distribution).

And stopping squid is nearly the same command.

Brute force (under a linux distribution):

killall squid

(if you are curious: repeat the command)


actually, you do need to repeat the command. Squid doesn't stop
immediately when it gets the standard kill signal (15), it does some sort
of graceful shutdown that can take a significant amount of time to
actually take place.

I ran into this when trying to upgrade squid: if I do killall squid;
/etc/init.d/squid start, I frequently get a message from the start script
that squid is still running, but then a little while later squid will stop
(and since the start command failed, an outage occurs)

I've also found that the number of killall cycles you need to go through
varies depending on the squid version and config: with versions other
than 3.2 I need to do two killall commands to get squid to stop, while
with 3.2 and more than one worker I find I need three.

what I do is to do killall squid in a loop until the killall command
returns an error that there is no process to kill.


Whereas what you should do to _abort_ Squid is simply:
  squid -k kill

I do hope you realize that killing Squid with -k kill, or with the kill or 
killall commands, causes you worse problems in the long term.


  The slow graceful shutdown trades a small amount of time on 
shutdown for about 10x to 20x that time when restarting: Squid is 
forced to scan the entire cache disk object by object if the swap.state 
index is not written out on shutdown.
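The two modes side by side, as a sketch:

```
squid -k shutdown   # graceful: waits for clients, writes swap.state
squid -k kill       # immediate abort: skips the index write, so the next
                    # startup rescans the whole cache_dir object by object
```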


 It is also trading some delay, waiting for existing client 
connections to complete, which prevents client browsers (or downstream 
caches!) from receiving corrupted content. Just as important, it allows 
Squid time to close the open disk files, preventing local cache corruption.


NP: we are slowly auditing the things which can be done immediately 
without the wait (like dropping idle connections, helpers, etc.). 
Assistance pointing them out for fixing is welcome.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Amos Jeffries

On 09/06/11 23:24, Tom Tux wrote:

Hi

I need to verify that I get a mail when squid dies (I put a correct
mail address in the cache_mgr directive). Is there a way/hack to force
squid to die? I tried several things like revoking permissions on the
cache dir, moving cache.log away... - but squid is still alive.



The email address is used on the error pages, to display an *optional* 
"send an email to the admin" URL.


Sending the abort emails automatically requires squid to hit an assertion 
failure, which triggers abort(). That slows down the recovery period, so 
external tools are preferred which watch the log stream for "assertion 
failure:" (I think that is the text in all squids).


 If you can find a way to reliably hit one of the assertions remotely, 
that is a DoS vulnerability and *we want to know* ASAP.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] multiple http_port names

2011-06-10 Thread Amos Jeffries

On 10/06/11 09:34, errno wrote:


I've got squid conf that looks a bit like the following snippet:

# ...
acl ip-192.168.1.2 myip 192.168.1.2

http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2

tcp_outgoing_address 192.168.1.2 ip-192.168.1
# ...


Question:  do those http_port directives need to have
unique 'name=' entries?


unique.


Or can they all share the
same name? Also - and perhaps more importantly,
is there any similar(ish) problems with the way I've
named the 'myip' acl the same as the http_port names?


myip is at the mercy of the interception lookups.

myportname only depends on what you put in squid.conf and which actual 
listening port the traffic arrives on.


The ACL values can of course list several port name= values if you want 
several ports grouped to be handled the same.
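For example, a sketch grouping two named ports under one acl (the second name is hypothetical):

```
acl grouped_ports myportname ip-192.168.1.2 ip-192.168.2.2
```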




I tested the above and things _appear_ to be working
as expected, but... I'm still wondering whether there
might be subtle issues with the above config.


The only subtle problem I have been able to think of is that wildcard 
listening ports are neutral/agnostic of the packets arriving on them.
 The name is fixed to the actual receiving port on Squid, not the 
client-visible IP.
 This is by design, but can be confusing to some who don't realize how 
wildcard ports work, particularly when dealing with multiple IPs spread 
over multiple protocols.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Helmut Hullen
Hello, Tom,

You wrote on 10.06.11:

 From squid.conf.documented:
 --- SNIP ---
 #  TAG: cache_mgr
 #   Email-address of local cache manager who will receive
 #   mail if the cache dies.  The default is webmaster.

"if the cache dies"

That's a different thing from squid dying.

Best regards!
Helmut


Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Tom Tux
Mmm...

Now I was able to force squid to die (with gdb):

From cache.log:
--- SNIP ---
(squid)(death+0x37)[0x564d37]
/lib64/libpthread.so.0(+0xf5d0)[0x7f99ebe705d0]
FATAL: Received Segment Violation...dying.
2011/06/10 10:37:28.408| storeDirWriteCleanLogs: Starting...
2011/06/10 10:37:28.409| assertion failed: comm.cc:301: data ==
COMMIO_FD_READCB(fd)
--- SNAP ---

But there is no mail sent to the mail address defined in the
cache_mgr directive. I also made a tcpdump on port 25 to check whether
a mail is even sent. But no packets on port 25, no mail.





2011/6/10 Helmut Hullen hul...@t-online.de:
 Hello, Tom,

 You wrote on 10.06.11:

 From squid.conf.documented:
 --- SNIP ---
 #  TAG: cache_mgr
 #       Email-address of local cache manager who will receive
 #       mail if the cache dies.  The default is webmaster.

 "if the cache dies"

 That's a different thing from squid dying.

 Best regards!
 Helmut



[squid-users] Squid and DNS

2011-06-10 Thread ML Alasta
Hi

I have two Squids which make all DNS requests from the same source port.
This behaviour causes problems with my load balancer.
Is there a solution to randomize the source port of all DNS requests?

Best Regards

Samuel


Re: [squid-users] multiple http_port names

2011-06-10 Thread errno
On Friday, June 10, 2011 02:20:49 AM errno wrote:
 * the single instance listening on multiple ip:port http_ports:
 192.168.1.2:80, 192.168.1.2:8080, 192.168.1.2:80 and
 192.168.1.2:8080


Damn, that should have said:

* the single instance listening on multiple ip:port http_ports:
192.168.1.2:80, 192.168.1.2:8080, 192.168.2.2:80 and
192.168.2.2:8080


... e.g. 192.168.1.2 and 192.168.2.2



Re: [squid-users] A way/hack to force squid to die?

2011-06-10 Thread Amos Jeffries

On 10/06/11 20:41, Tom Tux wrote:

Mmm...

Now I was able to force squid to die (with gdb):

 From cache.log:
--- SNIP ---
(squid)(death+0x37)[0x564d37]
/lib64/libpthread.so.0(+0xf5d0)[0x7f99ebe705d0]
FATAL: Received Segment Violation...dying.
2011/06/10 10:37:28.408| storeDirWriteCleanLogs: Starting...
2011/06/10 10:37:28.409| assertion failed: comm.cc:301: data ==
COMMIO_FD_READCB(fd)
--- SNAP ---

But there is no mail sent to the mail address defined in the
cache_mgr directive. I also made a tcpdump on port 25 to check whether
a mail is even sent. But no packets on port 25, no mail.



Hmm,
 - check for a file /tmp/squid-XX
 - check that the command line:
   $mail_program $cache_mgr /tmp/squid-XX
 works as per the mail_program directive requirements.
 - check that your mail system accepts email from 
squid@$unique_hostname and squid@$visible_hostname


 ** Email spam filters are one of several reasons why the hostname must be a 
FQDN with working rDNS.
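A sketch of the first two checks (the /tmp/squid-XX name is the placeholder from the list above; "mail" and "webmaster" are the mail_program and cache_mgr defaults):

```
ls -l /tmp/squid-*              # is a report file being written at all?
mail webmaster < /tmp/squid-XX  # does the mail_program invocation deliver?
```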


Amos



2011/6/10 Helmut Hullen hul...@t-online.de:

Hello, Tom,

You wrote on 10.06.11:


 From squid.conf.documented:
--- SNIP ---
#  TAG: cache_mgr
#   Email-address of local cache manager who will receive
#   mail if the cache dies.  The default is webmaster.


"if the cache dies"

That's a different thing from squid dying.

Best regards!
Helmut




--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] Squid and DNS

2011-06-10 Thread Amos Jeffries

On 10/06/11 21:10, ML Alasta wrote:

Hi

I have two Squids which make all DNS requests from the same source port.
This behaviour causes problems with my load balancer.
Is there a solution to randomize the source port of all DNS requests?


Squid chooses a random DNS port on every startup, and should do so on 
every reconfigure as well, but I have not checked that.


Getting two to pick the same number out of 64K possibilities is a 
remarkable coincidence, unless you have patches causing it.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] multiple http_port names

2011-06-10 Thread Amos Jeffries

On 10/06/11 21:20, errno wrote:

On Friday, June 10, 2011 12:07:49 AM Amos Jeffries wrote:

On 10/06/11 09:34, errno wrote:

I've got squid conf that looks a bit like the following snippet:

# ...
acl ip-192.168.1.2 myip 192.168.1.2

http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2

tcp_outgoing_address 192.168.1.2 ip-192.168.1
# ...


Question:  do those http_port directives need to have
unique 'name=' entries?


unique.


I stand corrected on this. They MAY be set with matching names.




Or can they all share the
same name? Also - and perhaps more importantly,
is there any similar(ish) problems with the way I've
named the 'myip' acl the same as the http_port names?


myip is at the mercy of the interception lookups.

myportname only depends on what you put in squid.conf and which actual
listening port the traffic arrives on.



Well, one thing that occurred is that at first I was using
myportname rather than myip for the acl in question -
but when doing so, all traffic appeared to be coming
from the server's primary IP addr (in this case, 192.168.1.1)
rather than what I intended as specified by tcp_outgoing_address -
in other words, the following (with a bit more config added for
context):

# ...
# 192.168.1.2
acl ip-192.168.1.2 myportname ip-192.168.1.2
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2
tcp_outgoing_address 192.168.1.2 ip-192.168.1.2

# 192.168.2.2
acl ip-192.168.2.2 myportname ip-192.168.2.2
http_port 192.168.2.2:80 name=ip-192.168.2.2
http_port 192.168.2.2:8080 name=ip-192.168.2.2
tcp_outgoing_address 192.168.2.2 ip-192.168.2.2
# ...


Using the above, tcp_outgoing_address did not work as
expected/intended: using a tool such as http://www.whatismyip.com/ ,
showed 192.168.1.1 in all cases, regardless of which
http_port/myportname the client originated from.


whatismyip.com uses far more than the TCP/IP address to determine what 
the IP is.


To accurately judge whether the packet IP is correct you need to check 
the packets as they flow. "tcpdump -s 0 -w file" plus wireshark to read 
the resulting packet dump is the easiest.


Failing that a netstat dump of the TCP connections.

Failing that a cache.log ALL,5 trace has the details buried in it somewhere.
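The suggested checks as concrete commands (the capture filename is an assumption):

```
tcpdump -s 0 -w squid-out.pcap tcp   # capture; open the file in wireshark
netstat -tn                          # list TCP connections with local IPs
```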




Switching from the above, to:

# ...
# 192.168.1.2
acl ip-192.168.1.2 myip 192.168.1.2
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2
tcp_outgoing_address 192.168.1.2 ip-192.168.1.2

# 192.168.2.2
acl ip-192.168.2.2 myip 192.168.2.2
http_port 192.168.2.2:80 name=ip-192.168.2.2
http_port 192.168.2.2:8080 name=ip-192.168.2.2
tcp_outgoing_address 192.168.2.2 ip-192.168.2.2
# ...

... behaved as intended:  when clients went through
the http_port listener 192.168.2.2:80, the tcp_outgoing_address
worked as expected, wherein http://www.whatismyip.com
displayed 192.168.2.2 rather than 192.168.1.1.


* whatismyip.com is a global website out on the Internet
* 192.168.* is RFC1918 IP space
* RFC1918 IPs are not used on the Internet

Ergo the Squid outgoing IP address is not what is being displayed by 
that website.


There are 3 layers of protocols above IP where the address can be found. 
I think whatismyip uses only the HTTP-layer ones: the X-Forwarded-For 
and Via: headers added by Squid to provide UID and looping detection.


When the proxy is working properly, whatismyip.com should display the 
nicely anonymous RFC1918 address (192.168.1.250 or whatever the client 
browser has been assigned by your LAN).





Hope that makes sense; to rephrase/summarize:

* squid server's main/primary IP:  192.168.1.1

* one instance of squid running;

* the single instance listening on multipleip:port  http_ports:
192.168.1.2:80, 192.168.1.2:8080, 192.168.1.2:80 and
192.168.1.2:8080

results:

~ first example, using:
acl ip-192.168.1.2 myportname ip-192.168.1.2
and:
acl ip-192.168.2.2 myportname 192.168.2.2
... all cache traffic was detected as originating from server's
main/primary ip: 192.168.1.1 - and not from the specified
tcp_outgoing_address

~ BUT, second example, using:
acl ip-192.168.1.2 myip 192.168.1.2
and:
acl ip-192.168.2.2 myip 192.168.2.2
... all cache traffic was this time detected as originating
from the specified tcp_outgoing_address, as intended,
rather than from the squid server instances primary
ip addr (192.168.1.1).


snip duplicate of the same results


# ...
#
# don't work:
#acl ip-192.168.1.2 myportname ip-192.168.1.2
#
# works as expected/intended:
acl ip-192.168.1.2 myip 192.168.1.2
#
http_port 192.168.1.2:80 name=ip-192.168.1.2
http_port 192.168.1.2:8080 name=ip-192.168.1.2

tcp_outgoing_address 192.168.1.2 ip-192.168.1.2
#...


I'd like to understand what's going on, but the docs
I've read are not supplying any real information on
the matter.


Squid stores the local-endpoint IP address as local_ip and the http_port 
name= as myportname next to each other in an object recording the 
request 

[squid-users] Re: Squid 3.2 CONNECT not working with NTLM - logs

2011-06-10 Thread Alex Crow

Hi,

I noticed a few people have downloaded my logs - has anyone found 
anything useful? It does appear to be a bug to me, but if it is a change 
in behaviour I'd be happy for any hints to fix my config.


All the best

Alex




Further to my original post, I've found that NTLM auth does not work 
with CONNECT requests from clients in 3.2. I use a config similar to 
the one I'm using in production on 2.7.


I have disabled ssl-bump and dynamic SSL until I can resolve this.

I do *not* have to add "http_access allow CONNECT" before "http_access 
deny CONNECT !SSL_ports" in my production config, which seems correct 
as otherwise unauthorised users could use CONNECT.


Here is the config I am testing, which works fine for plain HTTP traffic:

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 30
auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 30
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
external_acl_type nt_group ttl=20 children=40 %LOGIN 
/usr/lib/squid3/ext_wbinfo_group_acl

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
acl AuthorizedUsers proxy_auth REQUIRED
always_direct allow all
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access allow AuthorizedUsers
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow all AuthorizedUsers
http_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
cache_mem 2048 MB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir diskd /var/spool/squid3 36 64 512
debug_options ALL,3
coredump_dir /var/cache
refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
refresh_pattern .                    0  20%   4320
dns_defnames on

However CONNECT requests do not work, the client sits forever waiting 
for data while squid does not match against AuthorizedUsers and 
attempts to return an error page:


Logs are available here:

http://www.nanogherkin.com/nonssl_cache.log.gz
http://www.nanogherkin.com/ssl_cache.log.gz

Any help much appreciated.

Regards

Alex





Re: [squid-users] multiple http_port names

2011-06-10 Thread errno
On Friday, June 10, 2011 07:33:07 AM Amos Jeffries wrote:
snip
  ... behaved as intended:  when clients went through
  the http_port listener 192.168.2.2:80, the tcp_outgoing_address
  worked as expected, wherein http://www.whatismyip.com
  displayed 192.168.2.2 rather than 192.168.1.1.
 
 * whatismyip.com is a global website outside on The Internet
 * 192.168.* is a rfc1918 IP space
 * rfc1918  IP are not used on The Internet
 
 Ergo the Squid outgoing IP address is not what is being displayed by
 that website.
 

Sorry for making things difficult with my example - I completely
failed to make it understood that the IPs in the config examples were
dummy values.  All the actual IPs in question under the real config 
are indeed within the publicly routable, non-reserved address space.

So, to simplify, I found that using:

acl aclname myportname someStringA
http_port ipaddr:port name=someStringA
tcp_outgoing_address ipaddr someStringA

... failed to cause the tcp_outgoing_address to work as 
expected/intended, while:

acl aclname myip ipaddr
http_port ipaddr:port name=someStringA
tcp_outgoing_address ipaddr someStringA

... worked as intended.

.. the only change being that I used the 'myip' acl rather
than the 'myportname' acl.



Re: [squid-users] some thougs about pushing objects

2011-06-10 Thread Luis Daniel Lucio Quiroz
On Friday, 3 June 2011 23:52:59, Eliezer Croitoru wrote:
 well if you do want to push an object you can do it in a more elegant way:
 
 export http_proxy=http://localhost:3128 ; wget http://fqdn/object
 
 and use it on a big site using recursive download and on a ram drive.
 also another tip is to use --delete-after:
 this will pull the file into squid and write it to the ram... it will take
 some load off the disks... it will leave the needed files in squid
 and not on the disk as files.
 
 i am using this method to preload and update specific files cache using
 crontab jobs.
 
 Regards
 Eliezer
 
 On 02/06/2011 17:20, Luis Daniel Lucio Quiroz wrote:
  just thinking
  
  if squid-purge tool can purge an specific object from cache by modifing
  the db cache db,
  
  is there a way to push an object like that with a tool?
  
  
  LD

Yes,

the only problem is that the object won't enter immediately; I shall have to 
ask for it several times, according to policy


[squid-users] squid 3.1.

2011-06-10 Thread Shenavandeh
Hi,

I have a squid installation that crashes twice a day under a load of
7 Mbps of bandwidth, with the following message in squid.out and no
specific traces in cache.log

Startup: Fri Jun 10 15:46:20
dying from an unhandled exception: !theConsumer
terminate called after throwing an instance of 'TextException'
  what():  !theConsumer
Startup: Fri Jun 10 19:55:29

It is compiled using following options:
 sbin]# ./squid -v
Squid Cache: Version 3.1.12.1
configure options:  '--enable-linux-netfilter'
'--enable-storeio=ufs,aufs' '--enable-poll'
'--enable-x-accelerator-vary' '--enable-follow-x-forwarded-for'
'--enable-ssl' '--enable-snmp' '--enable-removal-policies'
'--enable-gnuregex' '--with-large-files' '--enable-async-io'
'CFLAGS=-DNUMTHREADS=300' --with-squid=/root/squid-3.1.12.1
--enable-ltdl-convenience

the platform is as follows:

CPU :4 cores of  Intel(R) Xeon(R) CPU   E5504  @ 2.00GHz
RAM : 8GB
OS: CentOS 5.6 :
Kernel: Linux version 2.6.25 compiled with tproxy option.

the Squid configuration:

cache_mem 4000 MB

dead_peer_timeout 30 seconds
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

maximum_object_size 50 MB
maximum_object_size_in_memory 500 KB
minimum_object_size 0 KB

cache_replacement_policy heap LFUDA
memory_replacement_policy heap LRU

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow to_localhost


http_access allow localhost
http_access allow localnet
http_access allow to_localhost

http_access deny all

http_port 3128 tproxy

hierarchy_stoplist cgi-bin ?

cache_dir aufs /cache 24000 16 256

coredump_dir cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
refresh_pattern .                    0  20%   4320

cache_effective_user squid
cache_effective_group squid

half_closed_clients off
buffered_logs on
client_db off
quick_abort_max 0 KB
quick_abort_min 0 KB
memory_pools off

cache_swap_high 95%
cache_swap_low 90
logfile_rotate 10%

visible_hostname Cache

quick_abort_min 32 KB
quick_abort_max 32 KB
quick_abort_pct 95

negative_ttl 3 minutes
positive_dns_ttl 6 hours

pipeline_prefetch on

acl snmpkey snmp_community public
snmp_port 3401
snmp_access allow snmpkey localhost
snmp_access deny all

refresh_pattern -i
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$
26 90% 260009 override-expire
refresh_pattern -i
\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff|pdf|uxx)$ 26
90% 260009 override-expire


I would be most grateful if somebody helps me out.
Yours Faithfully,
---
Amir H Sh


[squid-users] wtf

2011-06-10 Thread MrNicholsB

Squid is caching but refuses to serve cached content to my peers. It's
a small home network, and I need to get an estimate for something that
will likely take about an hour or two of work, to get my squid working
properly. It's quite frustrating to see squid's cache growing and 1 GB of
memory consumption, downloading http content like updates and game patches,
yet not being served that cached content. It's like a carrot dangling on a
stick, and I just apparently can't seem to get it.



Re: [squid-users] wtf

2011-06-10 Thread Amos Jeffries

On 11/06/11 08:47, MrNicholsB wrote:

Squid is caching but refuses to serve cached content to my peers. It's
a small home network, and I need to get an estimate for something that
will likely take about an hour or two of work, to get my squid working
properly. It's quite frustrating to see squid's cache growing and 1 GB of
memory consumption, downloading http content like updates and game patches,
yet not being served that cached content. It's like a carrot dangling on a
stick, and I just apparently can't seem to get it.


Have you dropped hierarchy_stoplist from your config?
 Or are you using proxy-only in the peers configuration?
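For reference, sketches of the two settings being asked about (the peer address and ports are placeholders):

```
# hierarchy_stoplist sends matching URLs direct, bypassing peers
hierarchy_stoplist cgi-bin ?
# proxy-only stops objects fetched via this peer being stored locally
cache_peer 192.0.2.1 parent 3128 3130 proxy-only
```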

What does your complete config look like right now?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2