Re: [squid-users] questions with squid-3.1

2010-02-16 Thread Amos Jeffries

Jeff Peng wrote:

I just downloaded squid-3.1 source and compile & install it on an
ubuntu linux box.
There are two questions around it:

1. # sbin/squid -k kill
squid: ERROR: No running copy



-k shutdown is preferred if you can. kill is quite drastic and immediate.


Though squid is running there, squid -k kill shows No running copy.
I think this is because squid can't find its pid file, so where is
the default pid file?


In this order (first found wins):

1) whatever is in squid.conf.

2) Whatever was built with --with-pidfile=/path/squid.pid

3) $PREFIX/var/run/squid.pid  with whatever was defined in --prefix=...

4) /usr/local/squid/var/run/squid.pid
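As a rough illustration of that lookup order, here is a small shell sketch (the directive parsing and default paths are simplified assumptions, not Squid's actual code):

```shell
#!/bin/sh
# Sketch of the pid-file resolution order listed above (simplified):
# 1) pid_filename in squid.conf, otherwise the build-time default
# under $PREFIX (which itself defaults to /usr/local/squid).
find_pidfile() {
    conf="$1"; prefix="$2"
    # 1) an explicit pid_filename directive in squid.conf wins
    p=$(awk '$1 == "pid_filename" { print $2 }' "$conf" 2>/dev/null)
    if [ -n "$p" ]; then
        echo "$p"
        return
    fi
    # 2)-4) otherwise the default under the configured prefix
    echo "${prefix:-/usr/local/squid}/var/run/squid.pid"
}

find_pidfile /nonexistent/squid.conf   # prints /usr/local/squid/var/run/squid.pid
```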




2. # sbin/squid -D
2010/02/16 15:02:41| WARNING: -D command-line option is obsolete.

-D is obsolete, why and what's the corresponding one to this option in
squid-3.1?


-D existed only to solve one problem which is now fully fixed.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] delay pool

2010-02-16 Thread Amos Jeffries

Adnan Shahzad wrote:

Thanks for reply

But how can I limit speed? I have 15 MB of Internet bandwidth and around 1200 
clients, meaning I want to give each client at least around 512 KB.

Can you help me how delay pool help me in this regard



http://wiki.squid-cache.org/Features/DelayPools covers the feature.

You say that in a very strange way; limit speed =?= give at least.
 The lower limit for speed will always be zero, and Squid can only 
affect the upper limit (cap).


15*1024 / 512 == 30 clients at your minimum required speed. With 1200 to 
service you better cross your fingers and hope for the best. :)



I really don't think you need to worry about this.  Squid will manage 
the speed relatively evenly between all simultaneously connected 
clients. Regardless of the number.


The only way you are going to be able to guarantee that minimum speed 
will be to turn clients away once you get more than 30 simultaneous 
connections. Which is perhaps worse than slowing the existing ones down.


You could fake it a bit by using the maxconn ACL type to limit how many 
simultaneous connections an IP can have.
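As a concrete sketch of combining the two mechanisms (the ACL names, subnet, connection cap, and pool numbers here are illustrative choices, not from this thread):

```
# Illustrative only: cap each client IP's connections, then rate-limit.
acl clients src 192.168.0.0/16
acl toomany maxconn 8
http_access deny clients toomany

# Class-2 pool: one aggregate bucket plus one bucket per client IP.
# 15 MB/s aggregate (per the thread's arithmetic) = 15728640 bytes/s;
# 512 KB/s per host = 524288 bytes/s.
delay_pools 1
delay_class 1 2
delay_parameters 1 15728640/15728640 524288/524288
delay_access 1 allow clients
delay_access 1 deny all
```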


Amos



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, February 16, 2010 10:17 AM

To: squid-users@squid-cache.org
Subject: Re: [squid-users] delay pool

Adnan Shahzad wrote:

Dear All,

I want to configure Per user quota, Mean 2 GB per day internet access. Can I do 
it with delay pools? But in delay pool how And my 2nd question is delay 
pool bucket is for day or for week or month?


Delay pools work in seconds. Being old code it's also got some numeric 32-bit 
limits hanging around.

What you can do with it is assign a per-user bandwidth speed. Squid will police 
it for your HTTP traffic.

Quota stuff is quite hard in squid and still requires some custom code using helpers to manage the bandwidth used and do all the accounting. 
Squid only does allow/deny control at that point.
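To illustrate the shape of such a helper (everything here is an assumption for illustration): an external_acl_type helper is handed one formatted line per request, e.g. the %LOGIN value, and answers "OK" (allow) or "ERR" (deny) per line.

```shell
#!/bin/sh
# Hypothetical quota-helper sketch, following the external_acl_type
# line-per-request protocol: answer "OK" (allow) or "ERR" (deny).
QUOTA_BYTES=2147483648    # 2 GB/day, as in the question

# verdict <bytes-used-today>: OK while the user is under quota
verdict() {
    if [ "$1" -lt "$QUOTA_BYTES" ]; then echo OK; else echo ERR; fi
}

# A real helper would loop over stdin, looking each user up in a shared
# store that a separate access-log accounting job keeps up to date:
#   while read user; do verdict "$(lookup_bytes_used "$user")"; done
verdict 1048576        # prints OK
verdict 4294967296     # prints ERR
```

The accounting side (summing bytes per user per day from access.log into that shared store) is exactly the custom code Amos describes; Squid itself only applies the resulting allow/deny verdict.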


Amos
--
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
   Current Beta Squid 3.1.0.16



--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


RE: [squid-users] RE: images occasionally don't get through

2010-02-16 Thread Folkert van Heusden
  To help the debugging I also found an url that is accessible to
everyone:
  
  failed:
  --
  192.168.0.90 - - [12/Feb/2010:15:28:21 +] GET
  http://www.ibm.com/common/v15/main.css HTTP/1.0 200 10015
 
http://www-03.ibm.com/systems/hardware/browse/linux/?c=serversintron=Linux
  2001t=ad Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1;
Trident/4.0;
  InfoPath.2; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR
3.5.30729)
  TCP_MEM_HIT:DIRECT

 http://redbot.org/?uri=http%3A%2F%2Fwww.ibm.com%2Fcommon%2Fv15%2Fmain.css
 In short: The website is screwed.
 1) The resource doesn't send Vary consistently.
 2) The ETag doesn't change between representations.

But is there a way around this? Maybe always direct? or no_cache?



smime.p7s
Description: S/MIME cryptographic signature


Re: [squid-users] RE: images occasionally don't get through

2010-02-16 Thread Amos Jeffries

Folkert van Heusden wrote:

To help the debugging I also found an url that is accessible to

everyone:

failed:
--
192.168.0.90 - - [12/Feb/2010:15:28:21 +] GET
http://www.ibm.com/common/v15/main.css HTTP/1.0 200 10015


http://www-03.ibm.com/systems/hardware/browse/linux/?c=serversintron=Linux

2001t=ad Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1;

Trident/4.0;

InfoPath.2; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR

3.5.30729)

TCP_MEM_HIT:DIRECT



http://redbot.org/?uri=http%3A%2F%2Fwww.ibm.com%2Fcommon%2Fv15%2Fmain.css
In short: The website is screwed.
1) The resource doesn't send Vary consistently.
2) The ETag doesn't change between representations.


But is there a way around this? Maybe always direct? or no_cache?



Oops, sorry. A "cache deny" rule will stop your Squid from participating in 
the problem.
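For illustration, such a rule might look like this (the ACL name and matching choice are examples, not from the thread):

```
# Stop caching the misbehaving site so Squid no longer serves its
# inconsistently-varied responses.
acl broken_vary dstdomain www.ibm.com
cache deny broken_vary
```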


Still, contact the webmaster. They should be able to do much better.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] questions with squid-3.1

2010-02-16 Thread Jeff Peng
On Tue, Feb 16, 2010 at 5:10 PM, Amos Jeffries squ...@treenet.co.nz wrote:


 1) whatever is in squid.conf.

 2) Whatever was built with --with-pidfile=/path/squid.pid

 3) $PREFIX/var/run/squid.pid  with whatever was defined in --prefix=...

 4) /usr/local/squid/var/run/squid.pid



Thanks Amos.
Then I found that "make install" didn't create a $PREFIX/var/run directory
in squid-3.1.
That means after installation I have to create the $PREFIX/var/run
directory by hand.

-- 
Jeff Peng
Email: jeffp...@netzero.net
Skype: compuperson


Re: [squid-users] SquidClamAV generates twice traffic

2010-02-16 Thread Henrik K
On Tue, Feb 16, 2010 at 03:25:24AM -0800, davefu wrote:
 
 Is there a way to avoid this double traffic generation?

Redirector based AV scanners are flawed and inefficient by design.

Use some sane package like HAVP or C-ICAP. Google for them.



[squid-users] Re: SquidClamAV generates twice traffic

2010-02-16 Thread davefu

Ok, I'll have a look. Thanks for the quick reply!
-- 
View this message in context: 
http://n4.nabble.com/SquidClamAV-generates-twice-traffic-tp1557220p1557237.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] SSLBump, help to configure for 3.1.0.16

2010-02-16 Thread Matus UHLAR - fantomas
On 14.02.10 18:30, Andres Salazar wrote:
 I am trying to configure SSLBump so that I can use squid in transparent
 mode and redirect with iptables/pf port 443 and 80 to squid.

Are you aware of all security concerns when intercepting HTTPS connections?

...I just wonder when the first proactive admin (or one of his managers)
will be sent to prison for breaking into users' connections.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
One World. One Web. One Program. - Microsoft promotional advertisement
Ein Volk, ein Reich, ein Fuhrer! - Adolf Hitler


Re: [squid-users] cache manager access from web

2010-02-16 Thread Matus UHLAR - fantomas
  On 14.02.10 01:32, J. Webster wrote:
  Would that work with:
  http_access deny manager CONNECT !SSL_ports

 On Mon, 15 Feb 2010 15:32:30 +0100, Matus UHLAR - fantomas
 uh...@fantomas.sk wrote:
  no, the manager is not fetched by CONNECT request (unless something is
  broken).
  
  you need https_port directive and acl of type myport, then allow
  manager only on the https port. that should work.
  
  note that you should access manager directly not using the proxy.

On 16.02.10 13:59, Amos Jeffries wrote:
 You may (or may not) hit a problem after trying that because the cache mgr
 access uses its own protocol 
 cache_object:// not https://.  An SSL tunnel with mgr access going through
 it should not have that problem but one never knows.

but it connects to the standard HTTP port, right?

I think that the problem itself lies in cachemgr.cgi not being able to
connect via SSL
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Windows 2000: 640 MB ought to be enough for anybody


[squid-users] [SOLVED] Re: [squid-users] Fwd: squid_ldap_auth with two or more domain-controllers?

2010-02-16 Thread Tom Tux
With the parameter -c [seconds] (on the ldap-helper), I can specify
how long the helper should try to contact the first domain-controller
before the second one is tried.
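A sketch of how that might look on the helper command line (server names, base DN, search filter, and the 5-second value are placeholders, not from this thread):

```
auth_param basic program /usr/lib/squid/squid_ldap_auth -c 5 \
    -h dc1.example.com -h dc2.example.com \
    -b "dc=example,dc=com" -f "(sAMAccountName=%s)"
```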

Regards,
Tom



2010/2/5 Tom Tux tomtu...@gmail.com:
 I can provide more than one server, but if the first one is not
 reachable, i'll get a timeout. Can I specify, how long the ldap-helper
 has to wait, until it tries to connect to the second or third
 ldap-server?
 Thanks.

 2010/1/29 Alejandro Bednarik alejan...@xtech.com.ar:
 Try with something like this.

 /usr/lib/squid/squid_ldap_group -h server1 -h server2 -h server3

 Cheers.

 2010/1/29 Tom Tux tomtu...@gmail.com:
 Hi all,

 Any hints about this question?
 Thanks a lot.


 -- Forwarded message --
 From: Tom Tux tomtu...@gmail.com
 Date: 2010/1/11
 Subject: squid_ldap_auth with two or more domain-controllers?
 To: squid-users squid-users@squid-cache.org


 I configured our squid to authenticate with squid_ldap_auth 
 squid_ldap_group against an active-directory. With the parameter -h
 [ip-address of domain-controller], I'm able to define one or more of
 our ldapservers (domain-controllers) for querying. But the setting
 with the specified failover-dc doesn't really seem to work.
 How can I define a 2nd or a third domain-controller, if the request to
 the first domain-controller fails? How can I define a query-timeout?
 Thanks a lot.
 Tom





[squid-users] Difference between Authenticate_ttl and auth_param basic credentialsttl ?

2010-02-16 Thread Tom Tux
Hi all,

I'm authenticating with the ldap-helper squid_ldap_auth against an
active directory. I can specify two credentials-ttls:

One is possible in the auth_param-directive:
auth_param basic credentialsttl 2 hour

The other one looks like this:
authenticate_ttl 1 hour


What is the difference between these two options? Which option will be
used, when I use the squid_ldap_auth-helper?

Is the authenticate_cache_garbage_interval also applicable when I
authenticate against an active-directory? Or is this directive useless
in this case?

Thanks a lot for your help.
Tom


Re: [squid-users] BYPASSED acl allowedurls url_regex /etc/squid/url.txt , help?

2010-02-16 Thread Andres Salazar
Hello,

acl allowedurls dstdomain /etc/squid/url.txt  works better. However
now the problem is that it's not evaluating https sites that use the
CONNECT method. So pretty much I can enter any https URL in the browser.

Is there anyway to control this?

Andres



On Sun, Feb 14, 2010 at 2:07 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Andres Salazar wrote:

 Hello,

 I am using:

 acl allowedurls url_regex /etc/squid/url.txt
 and then only allowing localnet to access that acl.

 a.) If a user behind localnet types:
 http://www.facebook.com/@http://www.allowed.org/page.html  they are
 able to peek some content of the disallowed website facebook. Is it
 possible to set the regex so that it is more strict and only matches
 if it is located at the beginning of the URL?

 The original line in the .txt file is: http://www.allowed.org/page.html


 http://www.gnu.org/software/emacs/manual/html_node/emacs/Regexps.html

 see: ^
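Applied to the example URL above, an anchored pattern might look like this (illustrative):

```
# ^ pins the match to the start of the URL; \. and \? escape regex
# metacharacters so only this exact page matches.
acl allowedurls url_regex ^http://www\.allowed\.org/page\.html$
```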

 b.) Also, what would be the correct regex for something like this:
 http://*.google.com Obviously that doesn't match.


 Best to avoid regex for domain matching.

 Use:
  acl google dstdomain .google.com


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16



[squid-users] help please

2010-02-16 Thread David C. Heitmann

hello,

i get no connection to MSN through squid! (client)
my iptables are stopped!
can somebody help me please..


windows live messenger 2009
squid 3.1.0.16
iptables 2.1.4 (deactivated for testing)

squid.conf configuration:


http://debianforum.de/forum/viewtopic.php?f=18&t=118306#
   # ICQ
   acl icq dstdomain .icq.com
   http_access allow icq

   # MSN Messenger
   acl msn urlpath_regex -i gateway.dll
   acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
   acl msn1 req_mime_type application/x-msn-messenger
   http_access allow msnd
   http_access allow msn
   http_access allow msn1



iptables config


http://debianforum.de/forum/viewtopic.php?f=18&t=118306#
   $IPTABLES -A INPUT -i $LAN -p tcp --dport 1863 -j ACCEPT
   $IPTABLES -A INPUT -i $LAN -p udp --dport 1863 -j ACCEPT

   $IPTABLES -A OUTPUT -p udp --dport 1863 -j ACCEPT
   $IPTABLES -A OUTPUT -p tcp --dport 1863 -j ACCEPT



the access log from squid


http://debianforum.de/forum/viewtopic.php?f=18&t=118306#
   1266321898.316    417 lafoffice02.speedport.ip TCP_MISS/200 5289
   POST http://gateway.messenger.hotmail.com/gateway/gateway.dll?
   onkeldave DIRECT/65.54.52.62 application/x-msn-messenger
   1266321898.598    273 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321900.583    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321902.580    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321904.585    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321906.582    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321908.579    264 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321910.598    279 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger


thanks dave


Re: [squid-users] SSLBump, help to configure for 3.1.0.16

2010-02-16 Thread K K
On Tue, Feb 16, 2010 at 7:17 AM, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
 On 14.02.10 18:30, Andres Salazar wrote:
 I am trying to configure SSLBump so that I can use squid in transparent
 mode and redirect with iptables/pf port 443 and 80 to squid.

Why transparent?


 Are you aware of all security concerns when intercepting HTTPS connections?

 ...I just wonder when the first proactive admin (or one of his
 managers) will be sent
 to prison for breaking into users' connections.

Laws vary by country.  At least in the US, SSL-Intercepting admins are
much more likely to face civil liability than any sort of criminal
charge.  So no prison, just bankruptcy.

With the requirement to install the proxy's signing certificate as trusted
on the machine being intercepted, generally this is only deployed in
situations where the owner of the proxy also already owns the user machine.


I'm using a commercial tool which gets around the headaches and legal
issues by inspecting the HTTPS outbound data on the client, before it
gets encrypted.   This agent only works with IE/Firefox.


[squid-users] Tunneling HTTPS and Grant access

2010-02-16 Thread Carlos Lopez
Hi all,

I'm new to squid and I was wondering if it is possible to tunnel https requests 
from authenticated users and then, via a script, block/allow access to https 
addresses depending on the result of the script. Let's say:

user1 and user2

user1 has access only to check Yahoo mail and to do internet banking on one 
specific site, so he/she may need the https port to be open (https and 
http are blocked on the firewall), but at the same time some filtering is 
needed, to restrict him/her from browsing, for example, adult sites.

user2 gets access only to browse through http, and also has some filters 
applied via script (for example, block access to webchat links)

Thanks for your help.

Carlos.







  




[squid-users] POST denied?

2010-02-16 Thread Bill Stephens
All,

I'm attempting to configure squid to proxy my requests to a Web
Service. I can access it via a GET request in my browser, but a request
submitted via Java that has been configured to use squid as my proxy
is denied:

Execute:Java13CommandLauncher: Executing
'/usr/lib/jvm/java-1.5.0-sun-1.5.0.18/jre/bin/java' with arguments:
'-Djava.endorsed.dirs=extensions/endorsed'
'-Dhttp.proxyPort=3128'
'-Dhttp.proxyHost=127.0.0.1'

1266334195.708  1 127.0.0.1 TCP_DENIED/411 1949 POST
http://cadsr-dataservice.nci.nih.gov:80/wsrf/services/cagrid/CaDSRDataService
- NONE/- text/html

Thinking that I had messed up my config, I returned to the out-of-the-box
squid.conf and I get the same error.

Thoughts?


[squid-users] Reverse proxy Basic Accelerator

2010-02-16 Thread don Paolo Benvenuto
Hi!

I'm trying to configure a basic reverse proxy accelerator for mediawiki,
and I found the instructions at
http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

but unfortunately they don't work with squid 2.7.

When trying to run squid I get:

ACL name 'all' not defined!
FATAL bungled squid.conf line 6: cache_peer_access wiki deny all

it seems that the cache_peer_access syntax has changed from 2.6 to 2.7,
but looking for light in the docs I couldn't figure how should I change
that config file.
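One likely explanation (an assumption, not confirmed in this thread): unlike Squid 3.x, Squid 2.x does not predefine the `all` ACL, so wiki examples written with 3.x in mind fail until it is declared:

```
# Squid 2.6/2.7: declare 'all' before first use; 3.x has it built in
acl all src 0.0.0.0/0.0.0.0
cache_peer_access wiki deny all
```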

Any hint?

Thank you!

-- 
don Paolo Benvenuto

http://parrocchialagaccio.it
is the parish website,
updated almost every day:
photos, news, calendar of parish life events

Contribute to wikipedia, the encyclopedia of which you are the author and
the reviewer: http://it.wikipedia.org

Cathopedia, the Catholic encyclopedia: http://it.cathopedia.org

Material for pastoral work: http://www.qumran2.net



[squid-users] Re: SSLBump, help to configure for 3.1.0.16

2010-02-16 Thread Andres Salazar
Hello,

I am still having issues with SSLBump. Apparently I am now getting
this error when I visit an https site with my browser explicitly
configured to use the https_port.

2010/02/16 14:31:14| clientNegotiateSSL: Error negotiating SSL
connection on FD 8: error:1407609B:SSL
routines:SSL23_GET_CLIENT_HELLO:https proxy request (1/-1)

Below is my sanitized config.


acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl to_localhost dst ::1/128
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128
https_port 3129  sslBump cert=/usr/local/squid/etc/server.crt
key=/usr/local/squid/etc/server.key
always_direct allow all
visible_hostname proxy1.komatsu.ca
unique_hostname proxy1.komatsu.ca
hierarchy_stoplist cgi-bin ?
coredump_dir /usr/local/squid/var/cache
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

Notice I didn't use transparent, because I wanted to test it first
without making it transparent.

I used this to generate the crt and key:
openssl genrsa -out server.key 1024
openssl req -new -key server.key -out /tmp/server.csr
openssl x509 -req -days 1825 -in /tmp/server.csr -signkey server.key
-out server.crt

Also, regarding the transparent option: is it OK if I redirect
ports 443 and 80 from the NAT box to another box on the network via
iptables? Or should both squid and the NAT gateway be on the same
network?


Thanks

Andres


[squid-users] Squid restarts because of icap problem

2010-02-16 Thread akinf

In squid logs I get the following error. I configured squid to connect to a
Java-based application 
through icap. But squid gets an error for some requests and restarts when it
gets the following error. 
Please help 

assertion failed: BodyPipe.cc:339: checkout.checkedOutSize == currentSize 
2010/02/15 17:45:27| Starting Squid Cache version 3.1.0.8 for
x86_64-unknown-linux-gnu... 
-- 
View this message in context: 
http://n4.nabble.com/Squid-restarts-because-of-icap-problem-tp1557855p1557855.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: slow performance 1 user : 3.1.0.16 on default config

2010-02-16 Thread Andres Salazar
Amos,

Oddly enough, the same config and same squid/OS build worked without
any problems in another box. Something happened in that Dual Atom 1GB
box that squid didn't like.


Below is the output of cache.log on the fast machine FYI in case there
is some kind of obscure bug.


2010/02/16 14:39:13| Starting Squid Cache version 3.1.0.16 for
i686-pc-linux-gnu...
2010/02/16 14:39:13| Process ID 19843
2010/02/16 14:39:13| With 1024 file descriptors available
2010/02/16 14:39:13| Initializing IP Cache...
2010/02/16 14:39:13| DNS Socket created at [::], FD 5
2010/02/16 14:39:13| Adding domain my.domain from /etc/resolv.conf
2010/02/16 14:39:13| Adding nameserver 4.2.2.2 from /etc/resolv.conf
2010/02/16 14:39:13| Adding nameserver 4.2.2.1 from /etc/resolv.conf
2010/02/16 14:39:13| Adding nameserver 196.40.3.10 from /etc/resolv.conf
2010/02/16 14:39:13| Adding nameserver 196.40.3.13 from /etc/resolv.conf
2010/02/16 14:39:14| Unlinkd pipe opened on FD 10
2010/02/16 14:39:14| Store logging disabled
2010/02/16 14:39:14| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2010/02/16 14:39:14| Target number of buckets: 1008
2010/02/16 14:39:14| Using 8192 Store buckets
2010/02/16 14:39:14| Max Mem  size: 262144 KB
2010/02/16 14:39:14| Max Swap size: 0 KB
2010/02/16 14:39:14| Using Least Load store dir selection
2010/02/16 14:39:14| Set Current Directory to /usr/local/squid/var/cache
2010/02/16 14:39:14| Loaded Icons.
2010/02/16 14:39:14| Accepting  HTTP connections at [::]:3128, FD 11.
2010/02/16 14:39:14| HTCP Disabled.
2010/02/16 14:39:14| Squid modules loaded: 0
2010/02/16 14:39:14| Ready to serve requests.
2010/02/16 14:39:15| storeLateRelease: released 0 objects
2010/02/16 14:39:43| Preparing for shutdown after 4 requests
2010/02/16 14:39:43| Waiting 0 seconds for active connections to finish
2010/02/16 14:39:43| FD 11 Closing HTTP connection
2010/02/16 14:39:45| Shutting down...
2010/02/16 14:39:45| basic/auth_basic.cc(97) done: Basic
authentication Shutdown.
2010/02/16 14:39:45| Closing unlinkd pipe on FD 10
2010/02/16 14:39:45| storeDirWriteCleanLogs: Starting...
2010/02/16 14:39:45|   Finished.  Wrote 0 entries.
2010/02/16 14:39:45|   Took 0.00 seconds (  0.00 entries/sec).
CPU Usage: 0.039 seconds = 0.029 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:3772 KB
Ordinary blocks: 3743 KB 16 blks
Small blocks:   0 KB  1 blks
Holding blocks:  2004 KB 10 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  28 KB
Total in use:5747 KB 152%
Total free:28 KB 1%
2010/02/16 14:39:45| Open FD READ/WRITE5 DNS Socket
2010/02/16 14:39:45| Open FD READ/WRITE8 Waiting for next request
2010/02/16 14:39:45| Open FD READ/WRITE9
googleads.g.doubleclick.net idle connection
2010/02/16 14:39:45| Open FD READ/WRITE   12 Waiting for next request
2010/02/16 14:39:45| Open FD READ/WRITE   13 www.google-analytics.com
idle connection
2010/02/16 14:39:45| Open FD READ/WRITE   14 mail.google.com idle connection
2010/02/16 14:39:45| Squid Cache (Version 3.1.0.16): Exiting normally.


Thank you for your kind Help Amos.

Andres

On Mon, Feb 15, 2010 at 11:07 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Andres Salazar wrote:

 Hello,

 This time we can see I followed the original config. A page like
 cnn.com takes about 60 seconds to load. Without the proxy it takes 10
 seconds.

 That sort of matches the relative number of parallel connections modern
 browsers will open to proxies vs to web servers.

 You need to read this:

 http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/

  

 Note that if you’re behind a proxy (at work, etc.) your download
 characteristics change. If web clients behind a proxy issued too many
 simultaneous requests an intelligent web server might interpret that as a
 DoS attack and block that IP address. Browser developers are aware of this
 issue and throttle back the number of open connections.

 In Firefox the network.http.max-persistent-connections-per-proxy setting has
 a default value of 4. If you try the Max Connections test page while behind
 a proxy it loads painfully slowly opening no more than 4 connections at a
 time to download 180 images. IE8 drops back to 2 connections per server when
 it’s behind a proxy, so loading the Max Connections test page shows an
 upperbound of 60 open connections. Keep this in mind if you’re comparing
 notes with others – if you’re at home and they’re at work you might be
 seeing different behavior because of a proxy in the middle.

 

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16



Re: [squid-users] help please

2010-02-16 Thread Amos Jeffries

David C. Heitmann wrote:

hello,

i get no connection to MSN through squid! (client)
my iptables are stopped!
can somebody help me please..


windows live messenger 2009
squid 3.1.0.16
iptables 2.1.4 (deactivated for testing)

squid.conf configuration:


http://debianforum.de/forum/viewtopic.php?f=18&t=118306#
   # ICQ
   acl icq dstdomain .icq.com
   http_access allow icq

   # MSN Messenger
   acl msn urlpath_regex -i gateway.dll
   acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
   acl msn1 req_mime_type application/x-msn-messenger
   http_access allow msnd
   http_access allow msn
   http_access allow msn1



iptables config


http://debianforum.de/forum/viewtopic.php?f=18&t=118306#
   $IPTABLES -A INPUT -i $LAN -p tcp --dport 1863 -j ACCEPT
   $IPTABLES -A INPUT -i $LAN -p udp --dport 1863 -j ACCEPT

   $IPTABLES -A OUTPUT -p udp --dport 1863 -j ACCEPT
   $IPTABLES -A OUTPUT -p tcp --dport 1863 -j ACCEPT



the access log from squid


http://debianforum.de/forum/viewtopic.php?f=18&t=118306#
   1266321898.316    417 lafoffice02.speedport.ip TCP_MISS/200 5289
   POST http://gateway.messenger.hotmail.com/gateway/gateway.dll?
   onkeldave DIRECT/65.54.52.62 application/x-msn-messenger
   1266321898.598    273 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321900.583    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321902.580    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321904.585    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321906.582    265 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321908.579    264 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger
   1266321910.598    279 lafoffice02.speedport.ip TCP_MISS/200 178 POST
   http://gateway.messenger.hotmail.com/gateway/gateway.dll? onkeldave
   DIRECT/65.54.52.62 application/x-msn-messenger


thanks dave


Your log trace shows that it _is_ working. 100%.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] NTLM Authentication and Connection Pinning problem

2010-02-16 Thread Jeff Foster
Henrik/Amos,

Did you get the tcpdumps? Is there anything else I can do to help
debug this problem?

Jeff F

2010/2/14 Jeff Foster
 I am sending 2 tcpdump files as attachments to you, Henrik, and Amos plus the
 mail list. I expect the mailing list will remove the attachments so I
 hope you both
 receive the attachments. If that doesn't work I can upload to a web
 server so you
 can download them.

 The dumps are:
  Stutt.3.cap - A squid 3.1.0.16 capture
  Stutt.4.cap - A squid 2.7.STABLE7 capture

 The previous emails were discussing the 3.1.0.16 capture, but the 2.7 capture
 also has the same issue.


 Jeff F


 2010/2/13 Henrik Nordström
 Can you please extend the trace to include the following two pieces of
 information as well:

 * The response. Both status code, and in case of 407 if there is an
 NTLMSSP_CHALLENGE blob or just the scheme name..

 * Who closes the connection first (FIN)

 lör 2010-02-13 klockan 14:17 -0600 skrev Jeff Foster:
 Client Packet summary
 No.  Time  SrcInfo
  7 0.001648  1916   GET http://simon/efms/ HTTP/1.0
  16 0.559067  1916   GET http://simon/efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
  21 0.752159  1916   GET http://simon/efms/ HTTP/1.0, NTLMSSP_AUTH, User: WG
  42 1.576078  1917   GET http://simon/efms/ HTTP/1.0
  65 1.961280  1917   GET http://simon/efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
  70 2.151384  1917   GET http://simon/efms/ HTTP/1.0, NTLMSSP_AUTH, User: WG
  85 2.991803  1918   GET http://simon/EFMS/efms.js HTTP/1.0
 144 3.370616  1918   GET http://simon/EFMS/efms.js HTTP/1.0, NTLMSSP_NEGOTIA
 157 3.560971  1918   GET http://simon/EFMS/efms.js HTTP/1.0, NTLMSSP_AUTH, U
 163 3.780493  1918   GET http://simon/EFMS/efms.css HTTP/1.0
 171 3.781469  1919   GET http://simon/Styles/perry_fix_font.css HTTP/1.0
 174 3.781643  1920   GET http://simon/Styles/forms.css HTTP/1.0
 179 3.782358  1921   GET http://simon/styles/dashboard.css HTTP/1.0
 195 3.969630  1918   GET http://simon/javascript/std.js HTTP/1.0
 207 4.161036  1919   GET http://simon/EFMS/efms.css HTTP/1.0, NTLMSSP_NEGOTI

 Server (upstream) packet summary
 No.  Time  SrcInfo
  12 0.369931  37156  GET /efms/ HTTP/1.0
  18 0.559496  37156  GET /efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
  23 0.752534  37156  GET /efms/ HTTP/1.0, NTLMSSP_AUTH, User: WGC\jfoste
  61 1.758489  37157  GET /efms/ HTTP/1.0
  67 1.961708  37157  GET /efms/ HTTP/1.0, NTLMSSP_NEGOTIATE
  72 2.152100  37157  GET /efms/ HTTP/1.0, NTLMSSP_AUTH, User: WGC\jfoste
 113 3.180079  37158  GET /EFMS/efms.js HTTP/1.0
 146 3.371116  37158  GET /EFMS/efms.js HTTP/1.0, NTLMSSP_NEGOTIATE
 159 3.561335  37158  GET /EFMS/efms.js HTTP/1.0, NTLMSSP_AUTH, User: WGC\jfo
 168 3.781256  37158  GET /EFMS/efms.css HTTP/1.0
 190 3.967221  37159  GET /Styles/perry_fix_font.css HTTP/1.0
 191 3.967513  37160  GET /Styles/forms.css HTTP/1.0
 192 3.967791  37161  GET /styles/dashboard.css HTTP/1.0
 197 3.970336  37158  GET /javascript/std.js HTTP/1.0
 210 4.161855  37161  GET /EFMS/efms.css HTTP/1.0, NTLMSSP_NEGOTIATE

 Jeff F





[squid-users] all traffic over squid and auth.

2010-02-16 Thread Christian Weiligmann
I have a problem.
I would like to route all requests from the internal network to the
internet over the squid proxy, using delegated authentication
(SQL, NTLM...).
Is that possible? I know that the transparent (interception) mode is not
able to authenticate. But what can I do?
For example: IPsec connections, OpenVPN connections and many other
client programs that use internet connections over squid. And I have to
log all the traffic with ip, username and password.

sorry for this stupid question, but i want to learn.







Re: [squid-users] all traffic over squid and auth.

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 20:53:52 +0100, Christian Weiligmann
christian.weiligm...@weiligmann-net.de wrote:
 I have a problem.
 I would like to route all requests from the internal network to the
 internet over the squid proxy, using delegated authentication
 (SQL, NTLM...).
 Is that possible? I know that the transparent (interception) mode is not
 able to authenticate. But what can I do?
 For example: IPsec connections, OpenVPN connections and many other
 client programs that use internet connections over squid. And I have to
 log all the traffic with ip, username and password.
 
 sorry for this stupid question, but i want to learn.

Well, you can't authenticate against the proxy itself while intercepting
the traffic. But there are all sorts of alternatives.

I recommend the one called WPAD or WPAD/PAC. It uses a PAC (proxy
auto-configuration) file to 'transparently' configure all the network
clients to use the proxy. Any client browser with their network proxy
settings turned to automatic will act like a regular proxy client without
any special configuration on the user's part. You may use authentication
with these clients!
 
http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers#Fully_Automatically_Configuring_Browsers_for_WPAD
  http://wiki.squid-cache.org/Technology/WPAD
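For reference, a minimal PAC file of the kind a WPAD setup serves might look like this. This is only a sketch: the proxy host/port is a placeholder, and the plain-hostname check is simplified compared to the usual isPlainHostName() PAC helper.

```javascript
// Minimal PAC (proxy auto-configuration) sketch. Browsers call
// FindProxyForURL() for every request and obey the returned directive.
// "proxy.example.net:3128" is a placeholder for your Squid host:port.
function FindProxyForURL(url, host) {
  // Plain intranet hostnames (no dots) bypass the proxy.
  if (host.indexOf(".") === -1) {
    return "DIRECT";
  }
  // Everything else goes through Squid, falling back to direct access.
  return "PROXY proxy.example.net:3128; DIRECT";
}
```

Browsers with their proxy settings on automatic fetch this file from the WPAD URL and can then be challenged for proxy authentication as normal.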

From your request I assume that non-login requests are not to be permitted
at all.

With WPAD going you can convert the interception requests into a captive
portal type setup. Where any requests arriving at it get sent to a custom
page (using deny_info and ACL) instructing the user how to setup their
browser to use the WPAD setting.
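A hedged sketch of such a captive-portal style rule; the port number, help-page URL, and ACL name are all invented for illustration:

```
# requests still arriving on the interception port get the setup page
acl intercepted myport 3129
deny_info http://help.example.net/wpad-setup.html intercepted
http_access deny intercepted
```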

This may need to be phased in with an IP range ACL slowly expanding across
the network to get clients updating their settings on a controlled gradual
basis. Watching the logs closely for programs which may need special admin
attention for any reason.

Amos



Re: [squid-users] cache manager access from web

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 14:20:15 +0100, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
  On 14.02.10 01:32, J. Webster wrote:
  Would that work with:
  http_access deny manager CONNECT !SSL_ports
 
 On Mon, 15 Feb 2010 15:32:30 +0100, Matus UHLAR - fantomas
 uh...@fantomas.sk wrote:
  no, the manager is not fetched by CONNECT request (unless something is
  broken).
  
  you need https_port directive and acl of type myport, then allow
  manager only on the https port. that should work.
  
  note that you should access manager directly not using the proxy.
 
 On 16.02.10 13:59, Amos Jeffries wrote:
 You may (or may not) hit a problem after trying that because the cache mgr
 access uses its own protocol cache_object:// not https://.  An SSL tunnel
 with mgr access going through it should not have that problem but one never
 knows.
 
 but it connect to standard HTTP port, right?

Yes.

 
 I think that the problem itself lies in cachemgr.cgi not being able to
 connect via SSL

Yes. This should probably be reported as an enhancement bug so we don't
forget it.
CacheMgr is due for a bit more of a cleanup someday, so it would be a
shame to miss this out.
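As a hedged sketch of the workaround discussed (a dedicated TLS listening port plus a myport ACL; the port number and certificate paths are invented):

```
# serve manager traffic on its own TLS port
https_port 3443 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
acl mgrport myport 3443
http_access allow manager localhost mgrport
http_access deny manager
```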

Amos


Re: [squid-users] BYPASSED acl allowedurls url_regex /et c/squid/url.txt , help?

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 08:35:15 -0600, Andres Salazar ndrsslz...@gmail.com
wrote:
 Hello,
 
 acl allowedurls dstdomain /etc/squid/url.txt  works better. However
 now the problem is that its not evaluating https sites that use the
 CONNECT method. So pretty much I can enter any https in the browser.
 
 Is there anyway to control this?

dstdomain is so basic it should work seamlessly between HTTP and HTTPS.
The hostname exists in both.

Can we see your complete http_access lines in exact order please?
and a copy of any include files such as that url.txt.
If it has confidential info then a private email to me would be fine.

Amos


Re: [squid-users] Difference between Authenticate_ttl and auth_param basic credentialsttl ?

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 14:51:19 +0100, Tom Tux tomtu...@gmail.com wrote:
 Hi all,
 
 I'm authentication with the ldap-helper squid_ldap_auth against an
 active directory. I can specify two credentials-ttls:
 
 One is possible in the auth_param-directive:
 auth_param basic credentialsttl 2 hour
 
 The other one looks like this:
 authenticate_ttl 1 hour
 
 
 What is the difference between this two options? Which option will be
 used, when I use the squid_ldap_auth-helper?
 
 Is the authenticate_cache_garbage_interval also possible, when I
 authenticate aginst an active-directory? Or is this directive in this
 case useless?
 
 Thanks a lot for your help.
 Tom

All the options you mention are always applied. They apply to different
parts of the auth sequencing.

 * authenticate_cache_garbage_interval - how often squid checks its cached
user details and discards old ones. This happens regardless of visitors.
Squid will also do this for each login at the time of use, so garbage
collection only prevents build-ups of wasted memory where a user is not
active for some time.

 * authenticate_ttl - how often a user is re-challenged for their
credentials, to verify that the machine is still being used by the same user.

 * credentialsttl - how long to cache the credentials received with their
valid/invalid state.

If credentialsttl is shorter than authenticate_ttl then the stored
credentials will be re-verified more often than the client is asked to
update them. If they fail at any time, the client will be re-challenged on
next request.

If credentialsttl is longer than authenticate_ttl then the client will be
asked to update its credentials more often (re-validation will only occur
if they actually change).


The defaults are that squid checks the background auth system at most
every hour to verify its stored credentials and only trouble the client
every 2 hours.
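In squid.conf terms, the defaults described above correspond to the following lines (a sketch; these merely restate the default values):

```
auth_param basic credentialsttl 2 hours
authenticate_ttl 1 hour
authenticate_cache_garbage_interval 1 hour
```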

Amos


Re: [squid-users] Tunneling HTTPS and Grant access

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 07:42:09 -0800 (PST), Carlos Lopez
the_spid...@yahoo.com wrote:
 Hi all,
 
 I'm new to squid and I was wondering if it is possible to tunnel https
 request from authenticated users and then via script block/allow access
to
 https address, but depending of what's the result of the script, let's
say:
 
 user1 and user2
 
 user1, have access to check yahoo mail only and do internet bank
 accounting for only one specific site, so he/she may need https port to
be
 open (https and http are blocked on the firewall), but at the same time
do
 some filtering, to restrict him/her to navigate for example Adult sites.
 
 user2, got access only to navigate through port http and also do some
 filters via script (for example, block access to webchat links)
 

Yes. With HTTPS traffic Squid still has access to the destination domain
name and port, and those by themselves are enough to filter on.

If some combo of the existing ACL types does not match what you want
cleanly, look at external_acl_type to call some more complicated helper
script.
  http://www.squid-cache.org/Doc/config/external_acl_type/

It's controlled using http_access same as any other request. Just include
CONNECT at the start of all the HTTPS-specific rules. Like so:
  http_access allow or deny CONNECT ...

For example, the default security rule:
  http_access deny CONNECT !SSL_ports
... blocks all non-SSL ports from being accessed via the tunnel.
(I'd advise placing your HTTPS rules below that one.)
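For the two users in the question, a hedged sketch of what this could look like; the ACL names and domain lists are invented, and SSL_ports/Safe_ports are assumed to be the stock definitions:

```
# hypothetical per-user HTTPS and filtering rules
acl user1 proxy_auth user1
acl user2 proxy_auth user2
acl allowed_https dstdomain .mail.yahoo.com .bank.example
acl webchat dstdomain .chat.example

http_access deny CONNECT !SSL_ports
http_access allow CONNECT user1 allowed_https
http_access deny CONNECT
http_access deny webchat
http_access allow user1
http_access allow user2
http_access deny all
```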

Amos



Re: [squid-users] POST denied?

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 10:45:50 -0500, Bill Stephens grape...@gmail.com
wrote:
 All,
 
 I'm attempting to configure squid to proxy my requests to a Web
 Service. I can access via a GET request in my browser but attempting
 to submit a request via Java that has been configured to use squid as
 my proxy:
 
 Execute:Java13CommandLauncher: Executing
 '/usr/lib/jvm/java-1.5.0-sun-1.5.0.18/jre/bin/java' with arguments:
 '-Djava.endorsed.dirs=extensions/endorsed'
 '-Dhttp.proxyPort=3128'
 '-Dhttp.proxyHost=127.0.0.1'
 
 1266334195.708  1 127.0.0.1 TCP_DENIED/411 1949 POST

http://cadsr-dataservice.nci.nih.gov:80/wsrf/services/cagrid/CaDSRDataService
 - NONE/- text/html
 
 Thinking that I had messed up my config, I returned to the out of the
 box squid.conf and I get the same error.
 
 Thoughts?

RFC 2616 (HTTP specification):
  411 Length Required

Your POST request is missing the Content-Length: header with the number of
bytes being posted.


It's also a bit weird to be sending :80 in the URL for http://.
Valid, but uncommon and may cause issues somewhere.
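To illustrate the 411, here is a sketch of the raw request the client should be emitting. The body content is invented; only the header layout matters, and the path matches the one in the posted log line.

```javascript
// Sketch: an HTTP/1.0 POST must carry a Content-Length header;
// Squid rejects bodies of unknown length with 411 Length Required.
const body = Buffer.from("<soap:Envelope>...</soap:Envelope>", "utf8");

const request =
  "POST /wsrf/services/cagrid/CaDSRDataService HTTP/1.0\r\n" +
  "Host: cadsr-dataservice.nci.nih.gov\r\n" +
  "Content-Type: text/xml\r\n" +
  "Content-Length: " + body.length + "\r\n" +  // omit this line and Squid answers 411
  "\r\n" +
  body.toString("utf8");
```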

Amos


Re: [squid-users] Reverse proxy Basic Accelerator

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 20:21:52 +0100, don Paolo Benvenuto
paolobe...@gmail.com wrote:
 Hi!
 
 I'm trying to configure a basic reverse proxy accelerator for mediawiki,
 and I found the instructions at
 http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator
 
 but unfortunately they don't work with squid 2.7.
 
 When trying to run squid I get:
 
 ACL name 'all' not defined!
 FATAL bungled squid.conf line 6: cache_peer_access wiki deny all
 
 it seems that the cache_peer_access syntax has changed from 2.6 to 2.7,
 but looking for light in the docs I couldn't figure how should I change
 that config file.
 
 Any hint?


Nothing has changed.
 You simply have not defined the 'all' ACL in squid.conf before the point
you tried to use it.

Add this at the start of the config:
  acl all src all

(if you have a later entry for acl all ... you can remove that later one
and avoid some startup warnings).

Amos


Re: [squid-users] How can I cache most content

2010-02-16 Thread Landy Landy

Thanks for replying.

As Marcus suggested, I added the following lines to squid.conf:

acl blockanalysis01 dstdomain .scorecardresearch.com .google-analytics.com
acl blockads01  dstdomain .rad.msn.com ads1.msn.com ads2.msn.com 
ads3.msn.com ads4.msn.com
acl blockads02  dstdomain .adserver.yahoo.com 
pagead2.googlesyndication.com
http_access deny blockanalysis01
http_access deny blockads01
http_access deny blockads02

But, tried testing blocking other sites to see how squid handled it:
acl blockads03 dstdomain .msn.com
acl blockads04 dstdomain .testsite.com

and I'm able to access these sites. Shouldn't these be blocked?



  


[squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
I'm starting to lose my mind here. New hardware test bed including a
striped set of SSD's

Same hardware, controller etc as my other squid servers, just added
SSD's for testing. I've used default threads and I've built with 24
threads. And what's blowing my mind is I get the error immediately
upon startup of my cache server (what?) and when I start banging on it
with over 75 connections p/sec..

The issue with the "well, if you only see a few, ignore them" advice is that
I actually get 500 errors when this happens. So something is going on
and I'm not sure what.

No Load
No I/O wait.

Fedora 12
Squid2.7Stable7
Dual Core
6gigs of ram
Striped SSD's

And did I mention no wait and zero load when this happens?

configure options:  '--host=i686-pc-linux-gnu'
'--build=i686-pc-linux-gnu' '--target=i386-redhat-linux'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info'
'--exec_prefix=/usr' '--libexecdir=/usr/lib/squid'
'--localstatedir=/var' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--disable-dependency-tracking'
'--enable-arp-acl' '--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,negotiate'
'--enable-basic-auth-helpers=NCSA,PAM,getpwnam,SASL'
'--enable-digest-auth-helpers=password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,session,unix_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-ident-lookups'
'--with-large-files' '--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--with-aio' '--with-maxfd=16384' '--with-dl'
'--with-openssl' '--with-pthreads' '--with-aufs-threads=24'
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu'
'target_alias=i386-redhat-linux' 'CFLAGS=-fPIE -Os -g -pipe
-fsigned-char -O2 -g -march=i386 -mtune=i686' 'LDFLAGS=-pie'


2010/02/16 14:15:49| Starting Squid Cache version 2.7.STABLE7 for
i686-pc-linux-gnu...
2010/02/16 14:15:49| Process ID 19222
2010/02/16 14:15:49| With 4096 file descriptors available
2010/02/16 14:15:49| Using epoll for the IO loop
2010/02/16 14:15:49| Performing DNS Tests...
2010/02/16 14:15:49| Successful DNS name lookup tests...
2010/02/16 14:15:49| DNS Socket created at 0.0.0.0, port 52964, FD 6

2010/02/16 14:15:49| User-Agent logging is disabled.
2010/02/16 14:15:49| Referer logging is disabled.
2010/02/16 14:15:49| Unlinkd pipe opened on FD 10
2010/02/16 14:15:49| Swap maxSize 32768000 + 102400 KB, estimated
2528492 objects
2010/02/16 14:15:49| Target number of buckets: 126424
2010/02/16 14:15:49| Using 131072 Store buckets
2010/02/16 14:15:49| Max Mem  size: 102400 KB
2010/02/16 14:15:49| Max Swap size: 32768000 KB
2010/02/16 14:15:49| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2010/02/16 14:15:49| Store logging disabled
2010/02/16 14:15:49| Rebuilding storage in /cache (CLEAN)
2010/02/16 14:15:49| Using Least Load store dir selection
2010/02/16 14:15:49| Set Current Directory to /var/spool/squid
2010/02/16 14:15:49| Loaded Icons.
2010/02/16 14:15:50| Accepting accelerated HTTP connections at
0.0.0.0, port 80, FD 13.
2010/02/16 14:15:50| Accepting ICP messages at 0.0.0.0, port 3130, FD 14.
2010/02/16 14:15:50| Accepting SNMP messages on port 3401, FD 15.
2010/02/16 14:15:50| WCCP Disabled.
2010/02/16 14:15:50| Ready to serve requests.
2010/02/16 14:15:50| Configuring host,domain.com Parent host.domain.com/80/0
2010/02/16 14:15:50| Store rebuilding is  0.4% complete
2010/02/16 14:16:05| Store rebuilding is 66.1% complete
2010/02/16 14:16:12| Done reading /cache swaplog (948540 entries)
2010/02/16 14:16:12| Finished rebuilding storage from disk.
2010/02/16 14:16:12|948540 Entries scanned
2010/02/16 14:16:12| 0 Invalid entries.
2010/02/16 14:16:12| 0 With invalid flags.
2010/02/16 14:16:12|948540 Objects loaded.
2010/02/16 14:16:12| 0 Objects expired.
2010/02/16 14:16:12| 0 Objects cancelled.
2010/02/16 14:16:12| 0 Duplicate URLs purged.
2010/02/16 14:16:12| 0 Swapfile clashes avoided.
2010/02/16 14:16:12|   Took 23.0 seconds (41316.8 objects/sec).
2010/02/16 14:16:12| Beginning Validation Procedure
2010/02/16 14:16:13|262144 Entries Validated so far.
2010/02/16 14:16:13|524288 Entries Validated so far.
2010/02/16 14:16:13|786432 Entries Validated so far.
2010/02/16 14:16:13|   Completed Validation Procedure
2010/02/16 14:16:13|   Validated 948540 Entries
2010/02/16 14:16:13|   store_swap_size = 3794160k
2010/02/16 14:16:14| storeLateRelease: released 0 objects
2010/02/16 14:18:00| squidaio_queue_request: WARNING - Queue congestion
2010/02/16 14:18:04| 

Re: [squid-users] Squid restarts because of icap problem

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 11:56:08 -0800 (PST), akinf
fatih.a...@turkcellteknoloji.com.tr wrote:
 In squid logs , i get the following error. I configured squid to connect
 to a
 java based applicaiton 
 through icap. But squid gets error for some requests and restarts when
it
 gets the following errro. 
 Please help 
 
 assertion failed: BodyPipe.cc:339: checkout.checkedOutSize ==
 currentSize 
 2010/02/15 17:45:27| Starting Squid Cache version 3.1.0.8 for
 x86_64-unknown-linux-gnu...

... maybe an RTFM will help?

(1)
 http://wiki.squid-cache.org/SquidFaq/BugReporting
   Please note that:
* bug reports are only processed if they can be reproduced or
identified in the current ... development versions of Squid.
* If you are running an older version of Squid the first response will
be to ask you to upgrade unless the developer who looks at your bug report
immediately can identify that the bug also exists in the current versions.
[in which case its already reported so the bugzilla needs to be checked and
read]

 http://www.squid-cache.org/Versions/v3/3.1/
   Daily auto-generated release.  This is the most recent working code
committed to the SQUID_3_1.
squid-3.1.0.16-20100215Feb 15 2010

Beta code changes fast enough that anything older than 8 weeks is not
supported. Issues that are not verified as existing in the most current
release snapshot get triaged to 'ignore until proven/ more info required /
upgrade required' status.

Also ...

(2)
http://www.squid-cache.org/Support/contact.dyn:
 For reporting bugs and build issues in the BETA and HEAD code releases
please use the squid-dev list.

http://www.squid-cache.org/Versions/
 If you have any problems with a development release please write to our
squid-b...@squid-cache.org or squid-...@squid-cache.org lists. DO NOT write
to squid-users with code-related problems.


Amos


[squid-users] Squid reverse with two web servers in different TCP ports

2010-02-16 Thread Alejandro Facultad
Dear all, I have Squid 2.7 configured with reverse mode. I have two web
sites:

OWA (webmail): 10.2.2.1 in port 80
Intranet: 10.2.2.2 in port 44000

Squid with OWA is working perfectly, but when I add to the squid.conf the
lines for Intranet, the Intranet site does not respond (requests don't
reach the Squid box apparently).

This is my config, taking into account Squid has the IP 10.1.1.1 and it's
listening on port 80:

http_port 10.1.1.1:80 accel defaultsite=www.owa.gb

cache_peer 10.2.2.1 parent 80 0 no-query originserver login=PASS name=owaServer
cache_peer 10.2.2.2 parent 44000 0 no-query originserver name=intRanet

acl OWA dstdomain www.owa.gb
acl Inet dstdomain www.intranet.gb

cache_peer_access owaServer allow OWA
cache_peer_access intRanet allow Inet

never_direct allow OWA
never_direct allow Inet

http_access allow OWA
http_access allow Inet
http_access deny all

miss_access allow OWA
miss_access allow Inet
miss_access deny all

In the testing PC, both www.owa.gb and www.intranet.gb point to 10.1.1.1
(Squid IP), and all the routing is OK.

After that, I have logs from OWA access but I haven't any log from intranet
access at all in the /var/log/squid/access.log file.

Can you tell me why Squid doesn't work with my second web site on port
44000?




Special thanks





Alejandro




Re: [squid-users] How can I cache most content

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 14:25:39 -0800 (PST), Landy Landy
landysacco...@yahoo.com wrote:
 Thanks for replying.
 
 As Marcus suggested, I added the following lines to squid.conf:
 
 acl blockanalysis01 dstdomain .scorecardresearch.com
 .google-analytics.com
 acl blockads01  dstdomain .rad.msn.com ads1.msn.com ads2.msn.com
 ads3.msn.com ads4.msn.com
 acl blockads02  dstdomain .adserver.yahoo.com
 pagead2.googlesyndication.com
 http_access deny blockanalysis01
 http_access deny blockads01
 http_access deny blockads02
 
 But, tried testing blocking other sites to see how squid handled it:
 acl blockads03 dstdomain .msn.com
 acl blockads04 dstdomain .testsite.com
 
 and I'm able to access these sites. Shouldn't these be blocked?

Only if you http_access deny them and do so in the right order relative
to all other http_access controls.
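For instance, a deny placed after a matching allow never fires. An ordering that would actually block those two test ACLs might look like this; the allow rule shown is a stand-in for whatever rule currently admits your clients:

```
# denies must precede the allow that matches the same clients
acl blockads03 dstdomain .msn.com
acl blockads04 dstdomain .testsite.com
http_access deny blockads03
http_access deny blockads04
http_access allow localnet   # your existing client allow rule goes here
http_access deny all
```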

Amos


Re: [squid-users] Cache manager analysis

2010-02-16 Thread Chris Robertson

J. Webster wrote:

Ok - thanks.
2.HEAD - has this been included in the CentOS repository yet?


It doesn't look to even be in the CentOSPlus repos.


 I believe CentOS only has 2.6
  


Using the packaged software is fine if you are willing to accept the 
compromises that have been made.  RHEL 5 is based off of packages that 
were available when Fedora Core 6 was out (between October 2006 and May 
2007).  CentOS 5, of course uses the RHEL packages.


If you want performance, (or features, or compatibility) added in newer 
releases of software (be it Squid, Sendmail, openldap, etc.) you are 
going to have to compile it yourself.


Chris



Re: [squid-users] Squid reverse with two web servers in different TCP ports

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 20:02:34 -0300, Alejandro Facultad
alejandro_facul...@yahoo.com.ar wrote:
 Dear all, I have Squid 2.7 configured with reverse mode. I have two web 
 sites:
 
 OWA (webmail): 10.2.2.1 in port 80
 Intranet: 10.2.2.2 in port 44000
 
  Squid with OWA is working perfectly, but when I add to the squid.conf the
  lines for Intranet, the Intranet site does not respond (requests don't
  reach the Squid box apparently).
  
  This is my config, taking into account Squid has the IP 10.1.1.1 and it's
  listening on port 80:
 
 http_port 10.1.1.1:80 accel defaultsite=www.owa.gb
 
 cache_peer 10.2.2.1 parent 80 0 no-query originserver login=PASS 
 name=owaServer
 
 cache_peer 10.2.2.2 parent 44000 0 no-query originserver name=intRanet
 
 acl OWA dstdomain www.owa.gb
 acl Inet dstdomain www.intranet.gb
 
 cache_peer_access owaServer allow OWA
 cache_peer_access intRanet allow Inet
 

You should also prevent requests crossing over between these two peers
explicitly.

  cache_peer_access owaServer deny all
  cache_peer_access intRanet deny all

 never_direct allow OWA
 never_direct allow Inet
 
 http_access allow OWA
 http_access allow Inet
 
 http_access deny all
 
 miss_access allow OWA
 miss_access allow Inet
 miss_access deny all
 
  In the testing PC, both www.owa.gb and www.intranet.gb point to 10.1.1.1
  (Squid IP), and all the routing is OK.
  
  After that, I have logs from OWA access but I haven't any log from intranet
  access at all in the /var/log/squid/access.log file.
  
  Can you tell me why Squid doesn't work with my second web site on port
  44000?

You will need to add vhost to the existing http_port line to handle
multiple domains now regardless of what else the fix requires.

Also check:

 * Does the LAN DNS point at Squid?

 * Do the LAN clients know that its now normal port 80 to access the
internal site?
   You can avoid transition problems by temporarily adding:
  http_port 10.1.1.1:44000 accel vhost defaultsite=www.intranet.gb

 * Now that you are serving both websites do you still want www.owa.gb to
be the default one visited? (defaultsite=)
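Combining those points with the cross-over denies mentioned earlier in the reply, the relevant part of squid.conf might look like this (a sketch reusing the poster's own values):

```
http_port 10.1.1.1:80 accel vhost defaultsite=www.owa.gb

cache_peer 10.2.2.1 parent 80 0 no-query originserver login=PASS name=owaServer
cache_peer 10.2.2.2 parent 44000 0 no-query originserver name=intRanet

acl OWA dstdomain www.owa.gb
acl Inet dstdomain www.intranet.gb

cache_peer_access owaServer allow OWA
cache_peer_access owaServer deny all
cache_peer_access intRanet allow Inet
cache_peer_access intRanet deny all
```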

Amos


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 14:30:06 -0800, Tory M Blue tmb...@gmail.com wrote:
 I'm starting to lose my mind here. New hardware test bed including a
 striped set of SSD's
 
 Same hardware, controller etc as my other squid servers, just added
 SSD's for testing. I've used default threads and I've built with 24
 threads. And what's blowing my mind is I get the error immediately
 upon startup of my cache server (what?) and when I start banging on it
 with over 75 connections p/sec..
 
 The issue with the "well, if you only see a few, ignore them" advice is
 that I actually get 500 errors when this happens. So something is going on
 and I'm not sure what.
 
 No Load
 No I/O wait.
 
 Fedora 12
 Squid2.7Stable7
 Dual Core
 6gigs of ram
 Striped SSD's
 
 And did I mention no wait and zero load when this happens?
 
 configure options:  '--host=i686-pc-linux-gnu'
 '--build=i686-pc-linux-gnu' '--target=i386-redhat-linux'
 '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
 '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
 '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib'
 '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
 '--mandir=/usr/share/man' '--infodir=/usr/share/info'
 '--exec_prefix=/usr' '--libexecdir=/usr/lib/squid'
 '--localstatedir=/var' '--datadir=/usr/share/squid'
 '--sysconfdir=/etc/squid' '--disable-dependency-tracking'
 '--enable-arp-acl' '--enable-follow-x-forwarded-for'
 '--enable-auth=basic,digest,negotiate'
 '--enable-basic-auth-helpers=NCSA,PAM,getpwnam,SASL'
 '--enable-digest-auth-helpers=password'
 '--enable-negotiate-auth-helpers=squid_kerb_auth'
 '--enable-external-acl-helpers=ip_user,session,unix_group'
 '--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
 '--enable-delay-pools' '--enable-epoll' '--enable-ident-lookups'
 '--with-large-files' '--enable-linux-netfilter' '--enable-referer-log'
 '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
 '--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
 '--enable-wccpv2' '--with-aio' '--with-maxfd=16384' '--with-dl'
 '--with-openssl' '--with-pthreads' '--with-aufs-threads=24'
 'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu'
 'target_alias=i386-redhat-linux' 'CFLAGS=-fPIE -Os -g -pipe
 -fsigned-char -O2 -g -march=i386 -mtune=i686' 'LDFLAGS=-pie'
 
 
 2010/02/16 14:15:49| Starting Squid Cache version 2.7.STABLE7 for
 i686-pc-linux-gnu...
 2010/02/16 14:15:49| Process ID 19222
 2010/02/16 14:15:49| With 4096 file descriptors available
 2010/02/16 14:15:49| Using epoll for the IO loop
 2010/02/16 14:15:49| Performing DNS Tests...
 2010/02/16 14:15:49| Successful DNS name lookup tests...
 2010/02/16 14:15:49| DNS Socket created at 0.0.0.0, port 52964, FD 6
 
 2010/02/16 14:15:49| User-Agent logging is disabled.
 2010/02/16 14:15:49| Referer logging is disabled.
 2010/02/16 14:15:49| Unlinkd pipe opened on FD 10
 2010/02/16 14:15:49| Swap maxSize 32768000 + 102400 KB, estimated
 2528492 objects
 2010/02/16 14:15:49| Target number of buckets: 126424
 2010/02/16 14:15:49| Using 131072 Store buckets
 2010/02/16 14:15:49| Max Mem  size: 102400 KB
 2010/02/16 14:15:49| Max Swap size: 32768000 KB
 2010/02/16 14:15:49| Local cache digest enabled; rebuild/rewrite every
 3600/3600 sec
 2010/02/16 14:15:49| Store logging disabled
 2010/02/16 14:15:49| Rebuilding storage in /cache (CLEAN)
 2010/02/16 14:15:49| Using Least Load store dir selection
 2010/02/16 14:15:49| Set Current Directory to /var/spool/squid
 2010/02/16 14:15:49| Loaded Icons.
 2010/02/16 14:15:50| Accepting accelerated HTTP connections at
 0.0.0.0, port 80, FD 13.
 2010/02/16 14:15:50| Accepting ICP messages at 0.0.0.0, port 3130, FD
14.
 2010/02/16 14:15:50| Accepting SNMP messages on port 3401, FD 15.
 2010/02/16 14:15:50| WCCP Disabled.
 2010/02/16 14:15:50| Ready to serve requests.
 2010/02/16 14:15:50| Configuring host,domain.com Parent
 host.domain.com/80/0
 2010/02/16 14:15:50| Store rebuilding is  0.4% complete
 2010/02/16 14:16:05| Store rebuilding is 66.1% complete
 2010/02/16 14:16:12| Done reading /cache swaplog (948540 entries)
 2010/02/16 14:16:12| Finished rebuilding storage from disk.
 2010/02/16 14:16:12|948540 Entries scanned
 2010/02/16 14:16:12| 0 Invalid entries.
 2010/02/16 14:16:12| 0 With invalid flags.
 2010/02/16 14:16:12|948540 Objects loaded.
 2010/02/16 14:16:12| 0 Objects expired.
 2010/02/16 14:16:12| 0 Objects cancelled.
 2010/02/16 14:16:12| 0 Duplicate URLs purged.
 2010/02/16 14:16:12| 0 Swapfile clashes avoided.
 2010/02/16 14:16:12|   Took 23.0 seconds (41316.8 objects/sec).

Hmm, this may be part of a hint. The other clean loads I've seen posted
recently, even on old hardware, were at or very close to millions of
objects per second. All it's really doing so far is loading a text file into
memory from disk...

 2010/02/16 14:16:12| Beginning Validation Procedure
 2010/02/16 14:16:13|262144 Entries Validated so far.
 

Re: [squid-users] Different port per ip

2010-02-16 Thread Chris Robertson

cio...@gmail.com wrote:

Is it possible to restrict access to each ip but with a different port
for each ip?

for example:

user1 has access to ip1 port 8000
user2 has access to ip2 port 8001
  


Given proper declaration of the acls user1, user2, ip1, ip2, 
port8000 and port8001...


http_access allow user1 ip1 port8000
http_access deny ip1
http_access allow user2 ip2 port8001
http_access deny ip2

Chris




Re: [squid-users] Creating ip exception

2010-02-16 Thread Chris Robertson

Jose Ildefonso Camargo Tolosa wrote:

On Mon, Feb 15, 2010 at 12:34 AM, Martin Connell
mconn...@richmondfc.com.au wrote:
  

Dear Squid,

I am a new squid user, and I've been relegated the task of creating a couple
of exceptions based on IP address.

So basically, we have our squid setup so certain sites are banned for all
users, facebook etc. However there are 2 PCs we want to have access
specifically to facebook for work purposes. Can you please point me in the
right direction as to how I would go about this. I've been trying to google
this, I know I need to edit the squid.conf file but after looking through
that file not too sure how to do this. Any help would be much appreciated.


Hi!

Remember ACLs are used up to down, and the first one to hit will
be used, so, just add an allow for the IPs you want whitelisted,
before the ACL that blocks the pages.

I hope this helps,

Ildefonso Camargo

  


Otherwise, adjust the deny such that the deny itself excludes the IPs.

acl facebook dstdomain .facebook.com
acl researchIP src 192.168.182.97
http_access deny facebook !researchIP

Have a look at the FAQ section on ACLs 
(http://wiki.squid-cache.org/SquidFaq/SquidAcl) for more...


Chris





Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
 2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue congestion
 2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue congestion

 What can I look for, if I don't believe it's IO wait or load (the box
 is sleeping), what else can it be. I thought creating a new build with
 24 threads would help but it has not (I can rebuild with 10 threads vs
 the default 18 (is that right?) I guess.

 Each of the warnings doubles the previous queue size, so


 I think its time we took this to the next level of debug.
 Please run a startup with the option -X and lets see what squid is really
 trying to do there.

 Amos


Okay not seeing anything exciting here. Nothing new with -X and/or
with both -X and -d

2010/02/16 16:17:51| squidaio_queue_request: WARNING - Queue congestion
2010/02/16 16:17:59| squidaio_queue_request: WARNING - Queue congestion

No additional information was provided other than what appears to be
something odd between my config and what squid is loading into its
config.

for example;
conf file: maximum_object_size 1024 KB
What it says it's parsing:  2010/02/16 16:12:07| parse_line:
maximum_object_size 4096 KB

conf file: cache_mem 100 MB
What it says it's parsing: 2010/02/16 16:12:07| parse_line: cache_mem 8 MB

This may not be the answer, but it's odd for sure (

Nothing more on the queue congestion, no idea why this is happening.

2010/02/16 16:12:07| Memory pools are 'off'; limit: 0.00 MB
2010/02/16 16:12:07| cachemgrRegister: registered mem
2010/02/16 16:12:07| cbdataInit
2010/02/16 16:12:07| cachemgrRegister: registered cbdata
2010/02/16 16:12:07| cachemgrRegister: registered events
2010/02/16 16:12:07| cachemgrRegister: registered squidaio_counts
2010/02/16 16:12:07| cachemgrRegister: registered diskd
2010/02/16 16:12:07| diskd started
2010/02/16 16:12:07| authSchemeAdd: adding basic
2010/02/16 16:12:07| authSchemeAdd: adding digest
2010/02/16 16:12:07| authSchemeAdd: adding negotiate
2010/02/16 16:12:07| parse_line: authenticate_cache_garbage_interval 1 hour
2010/02/16 16:12:07| parse_line: authenticate_ttl 1 hour
2010/02/16 16:12:07| parse_line: authenticate_ip_ttl 0 seconds
2010/02/16 16:12:07| parse_line: authenticate_ip_shortcircuit_ttl 0 seconds
2010/02/16 16:12:07| parse_line: acl_uses_indirect_client on
2010/02/16 16:12:07| parse_line: delay_pool_uses_indirect_client on
2010/02/16 16:12:07| parse_line: log_uses_indirect_client on
2010/02/16 16:12:07| parse_line: ssl_unclean_shutdown off
2010/02/16 16:12:07| parse_line: sslproxy_version 1
2010/02/16 16:12:07| parse_line: zph_mode off
2010/02/16 16:12:07| parse_line: zph_local 0
2010/02/16 16:12:07| parse_line: zph_sibling 0
2010/02/16 16:12:07| parse_line: zph_parent 0
2010/02/16 16:12:07| parse_line: zph_option 136
2010/02/16 16:12:07| parse_line: dead_peer_timeout 10 seconds
2010/02/16 16:12:07| parse_line: cache_mem 8 MB
2010/02/16 16:12:07| parse_line: maximum_object_size_in_memory 8 KB
2010/02/16 16:12:07| parse_line: memory_replacement_policy lru
2010/02/16 16:12:07| parse_line: cache_replacement_policy lru
2010/02/16 16:12:07| parse_line: store_dir_select_algorithm least-load
2010/02/16 16:12:07| parse_line: max_open_disk_fds 0
2010/02/16 16:12:07| parse_line: minimum_object_size 0 KB
2010/02/16 16:12:07| parse_line: maximum_object_size 4096 KB
2010/02/16 16:12:07| parse_line: cache_swap_low 90
2010/02/16 16:12:07| parse_line: cache_swap_high 95
2010/02/16 16:12:07| parse_line: update_headers on
2010/02/16 16:12:07| parse_line: logfile_daemon /usr/lib/squid/logfile-daemon
2010/02/16 16:12:07| parse_line: cache_log /var/logs/cache.log
2010/02/16 16:12:07| parse_line: cache_store_log /var/logs/store.log
2010/02/16 16:12:07| parse_line: logfile_rotate 10
2010/02/16 16:12:07| parse_line: emulate_httpd_log off
2010/02/16 16:12:07| parse_line: log_ip_on_direct on
2010/02/16 16:12:07| parse_line: mime_table /etc/squid/mime.conf
2010/02/16 16:12:07| parse_line: log_mime_hdrs off
2010/02/16 16:12:07| parse_line: pid_filename /var/logs/squid.pid
2010/02/16 16:12:07| parse_line: debug_options ALL,1
2010/02/16 16:12:07| parse_line: log_fqdn off
2010/02/16 16:12:07| parse_line: client_netmask 255.255.255.255
2010/02/16 16:12:07| parse_line: strip_query_terms on
2010/02/16 16:12:07| parse_line: buffered_logs off
2010/02/16 16:12:07| parse_line: netdb_filename /var/logs/netdb.state
2010/02/16 16:12:07| parse_line: ftp_user Squid@
2010/02/16 16:12:07| parse_line: ftp_list_width 32
2010/02/16 16:12:07| parse_line: ftp_passive on
2010/02/16 16:12:07| parse_line: ftp_sanitycheck on
2010/02/16 16:12:07| parse_line: ftp_telnet_protocol on
2010/02/16 16:12:07| parse_line: diskd_program /usr/lib/squid/diskd-daemon
2010/02/16 16:12:07| parse_line: unlinkd_program /usr/lib/squid/unlinkd
2010/02/16 16:12:07| parse_line: storeurl_rewrite_children 5
2010/02/16 16:12:07| parse_line: storeurl_rewrite_concurrency 0
2010/02/16 16:12:07| parse_line: url_rewrite_children 5
2010/02/16 16:12:07| parse_line: url_rewrite_concurrency 0
2010/02/16 

Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 16:24:22 -0800, Tory M Blue tmb...@gmail.com wrote:
 2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue
congestion
 2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue
congestion

 What can I look for, if I don't believe it's IO wait or load (the box
 is sleeping), what else can it be. I thought creating a new build with
 24 threads would help but it has not (I can rebuild with 10 threads vs
 the default 18 (is that right?) I guess.

 Each of the warnings doubles the previous queue size, so


 I think its time we took this to the next level of debug.
 Please run a startup with the option -X and lets see what squid is
really
 trying to do there.

 Amos
 
 
 Okay not seeing anything exciting here. Nothing new with -X and/or
 with both -X and -d
 
 2010/02/16 16:17:51| squidaio_queue_request: WARNING - Queue congestion
 2010/02/16 16:17:59| squidaio_queue_request: WARNING - Queue congestion
 
 No additional information was provided other than what appears to be
 something odd between my config and what squid is loading into it's
 config.
 
 for example;
 conf file :maximum_object_size 1024 KB
 What it says it's parsing:  2010/02/16 16:12:07| parse_line:
 maximum_object_size 4096 KB
 
 conf file: cache_mem 100 MB
 What it says it's parsing: 2010/02/16 16:12:07| parse_line: cache_mem 8
MB
 
 This may not be the answer, but it's odd for sure (
 
 Nothing more on the queue congestion, no idea why this is happening.

To stdout/stderr or to cache.log? I think if that's going to stdout/stderr
it might be the defaults loading.
There should be two entries in that case; the later one is the correct one.

Though it may be worth double checking for other locations of squid.conf.

Amos


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
On Tue, Feb 16, 2010 at 4:45 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Tue, 16 Feb 2010 16:24:22 -0800, Tory M Blue tmb...@gmail.com wrote:
 2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue
 congestion
 2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue
 congestion

 What can I look for, if I don't believe it's IO wait or load (the box
 is sleeping), what else can it be. I thought creating a new build with
 24 threads would help but it has not (I can rebuild with 10 threads vs
 the default 18 (is that right?) I guess.

 Each of the warnings doubles the previous queue size, so


 I think its time we took this to the next level of debug.
 Please run a startup with the option -X and lets see what squid is
 really
 trying to do there.

 Amos


 Okay not seeing anything exciting here. Nothing new with -X and/or
 with both -X and -d

 2010/02/16 16:17:51| squidaio_queue_request: WARNING - Queue congestion
 2010/02/16 16:17:59| squidaio_queue_request: WARNING - Queue congestion

 No additional information was provided other than what appears to be
 something odd between my config and what squid is loading into it's
 config.

 for example;
 conf file :maximum_object_size 1024 KB
 What it says it's parsing:  2010/02/16 16:12:07| parse_line:
 maximum_object_size 4096 KB

 conf file: cache_mem 100 MB
 What it says it's parsing: 2010/02/16 16:12:07| parse_line: cache_mem 8
 MB

 This may not be the answer, but it's odd for sure (

 Nothing more on the queue congestion, no idea why this is happening.

 To stdout/stderr or cache.log?  I think if thats to stdout/stderr might be
 the defaults loading.
 There should be two in that case. The later one correct.

 Though it may be worth double checking for other locations of squid.conf.

 Amos

That's from cache.log, and I only have one squid.conf in /etc/squid; the
only other squid.conf is the HTTP configuration for cachemgr in
/etc/httpd/conf.d.

So it's really odd. Not getting anything on stdout/stderr.

But I don't want to get too deep into the config piece when the big deal
seems to be the congestion. Why more congestion with faster disks and
almost no load? I'm willing to run tests, tweak, or rebuild with various
settings, whatever; I just would like to figure this out.

Tory


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Amos Jeffries
On Tue, 16 Feb 2010 17:00:33 -0800, Tory M Blue tmb...@gmail.com wrote:
 On Tue, Feb 16, 2010 at 4:45 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:
 On Tue, 16 Feb 2010 16:24:22 -0800, Tory M Blue tmb...@gmail.com
wrote:
 2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue
 congestion
 2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue
 congestion

 What can I look for, if I don't believe it's IO wait or load (the
box
 is sleeping), what else can it be. I thought creating a new build
with
 24 threads would help but it has not (I can rebuild with 10 threads
vs
 the default 18 (is that right?) I guess.

 Each of the warnings doubles the previous queue size, so


 I think its time we took this to the next level of debug.
 Please run a startup with the option -X and lets see what squid is
 really
 trying to do there.

 Amos


 Okay not seeing anything exciting here. Nothing new with -X and/or
 with both -X and -d

 2010/02/16 16:17:51| squidaio_queue_request: WARNING - Queue
congestion
 2010/02/16 16:17:59| squidaio_queue_request: WARNING - Queue
congestion

 No additional information was provided other than what appears to be
 something odd between my config and what squid is loading into it's
 config.

 for example;
 conf file :maximum_object_size 1024 KB
 What it says it's parsing:  2010/02/16 16:12:07| parse_line:
 maximum_object_size 4096 KB

 conf file: cache_mem 100 MB
 What it says it's parsing: 2010/02/16 16:12:07| parse_line: cache_mem
8
 MB

 This may not be the answer, but it's odd for sure (

 Nothing more on the queue congestion, no idea why this is happening.

 To stdout/stderr or cache.log?  I think if thats to stdout/stderr might
 be
 the defaults loading.
 There should be two in that case. The later one correct.

 Though it may be worth double checking for other locations of
squid.conf.

 Amos
 
 That's from cache.log and I only have one squid.conf in /etc/squid and
 the only other squid.conf is the http configuration for cachemgr in
 /etc/httpd/conf.d

 /usr/local/squid/etc/squid/squid.conf ??

 
 So it's really odd. Not getting anything to stdin/stdout
 
 But don't want to get too into the config piece when the big deal
 seems to be the congestion. Why more congestion with faster disks and

I'm just thinking that if there is actually another config being loaded, any
optimizations in the non-loaded one are useless.

Amos
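
Amos's suggestion to double-check for other squid.conf locations can be scripted. The candidate paths below are common defaults and examples only, since the real default depends on build options such as --prefix:

```python
import os

# Illustrative list of places squid.conf commonly lives; adjust for your build.
CANDIDATES = [
    "/etc/squid/squid.conf",
    "/usr/local/squid/etc/squid.conf",
    "/usr/local/squid/etc/squid/squid.conf",
]

def existing_configs(paths=CANDIDATES):
    """Return the candidate paths that actually exist on this system."""
    return [p for p in paths if os.path.isfile(p)]

print(existing_configs(["/no/such/squid.conf"]))  # []
```

Running `squid -v` also prints the configure options, which show the built-in default config path.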


Re: [squid-users] Reverse proxy Basic Accelerator

2010-02-16 Thread Jeff Peng
On Wed, Feb 17, 2010 at 3:21 AM, don Paolo Benvenuto
paolobe...@gmail.com wrote:
 Hi!

 I'm trying to configure a basic reverse proxy accelerator for mediawiki,
 and I found the instructions at
 http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

 but unfortunately they don't work with squid 2.7.

 When trying to run squid I get:

 ACL name 'all' not defined!
 FATAL bungled squid.conf line 6: cache_peer_access wiki deny all
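
A likely cause, hedged: Squid 2.x does not predefine the 'all' ACL (Squid 3.x does), so wiki examples written for 3.x can fail on 2.7 until 'all' is declared explicitly near the top of squid.conf:

```
# Squid 2.7: declare 'all' yourself before referencing it.
acl all src all
# (older syntax: acl all src 0.0.0.0/0.0.0.0)
cache_peer_access wiki deny all
```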


I just updated the Perl module, which can be used to quickly generate a
config template for a Squid reverse proxy:

http://search.cpan.org/~pangj/Net-Squid-ReverseProxy-0.03/lib/Net/Squid/ReverseProxy.pm

-- 
Jeff Peng
Email: jeffp...@netzero.net
Skype: compuperson


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
  /usr/local/squid/etc/squid/squid.conf ??


 So it's really odd. Not getting anything to stdin/stdout

 But don't want to get too into the config piece when the big deal
 seems to be the congestion. Why more congestion with faster disks and

 I'm just thinking if there is actually another config being loaded, any
 optimizations in the non-loaded one are useless.

 Amos

Nope, only /etc/squid/squid.conf and /etc/squid/squid.conf.default.

I've done a find on my system and no others.

I'm going to run debug on my SDA 2.7stable13 boxen to see if I see
something similar. But I still don't think the config is going to
cause the queue congestion on an idle box.

Tory


[squid-users] Re: Squid restarts because of icap problem

2010-02-16 Thread akinf

Thank you for the reply, but what is RTFM?
-- 
View this message in context: 
http://n4.nabble.com/Squid-restarts-because-of-icap-problem-tp1557855p1558274.html
Sent from the Squid - Users mailing list archive at Nabble.com.