[squid-users] two connections - specific users ? problem....

2010-02-12 Thread David C. Heitmann

good morning my experts,
can I manage two internet connections with squid and allocate the
connections to specific users?




squid version 3.1.0.16
and debian lenny
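
The usual building block for this is tcp_outgoing_address plus policy
routing in the OS: tag traffic with an ACL, give each group its own
outgoing address, and route each address out a different uplink. A
minimal squid.conf sketch (addresses and group names are assumed, not
from this thread):

   # clients in group_a leave via uplink A's address, everyone else via B
   acl group_a src 192.168.1.0/24
   tcp_outgoing_address 10.0.0.1 group_a
   tcp_outgoing_address 10.0.1.1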


thanks in advance

dave



Re: [squid-users] squid + dansguardian + auth

2010-02-12 Thread Jose Lopes
Hi!

Have you compiled squid with the option --enable-follow-x-forwarded-for?
Please try to look at the requests between dansguardian and squid, to see
whether the X-Forwarded-For header is present.

Regards
Jose


Bruno Ricardo Santos wrote:

 Hi !

 I'm using (after trying 3.0) version 2.6-stable-21.3 (source RPM compiled 
 after some changes)
 The DansGuardian version is 2.10.1.1 (from sources)

 Cheers,

 Bruno



 - Original Message -
 From: Jose Lopes jlo...@iportalmais.pt
 To: squid-users squid-users@squid-cache.org
 Sent: Wednesday, 10 February 2010 19:09:30 GMT +00:00 Greenwich Time,
 Ireland, Portugal
 Subject: Re: [squid-users] squid + dansguardian + auth

 Hi!

 Which version of squid are you using?

 Regards
 Jose

 Jose Ildefonso Camargo Tolosa wrote:
   
 Hi!

 On Wed, Feb 10, 2010 at 9:35 AM, Bruno Ricardo Santos
 bvsan...@hal.min-saude.pt wrote:
   
 


 Hi all!

 I'm having some trouble configuring squid with auth + dansguardian content 
 filter.

 It's all configured, but when I try to browse, I get an error:

 Dansguardian 400
 URL malformed

 Does authentication (and the dansguardian filter) only work with a transparent
 proxy, or do I have some configuration wrong?

 If I configure the browser to access the squid port directly, everything
 works perfectly...
 
   
 Ok.

   
 
 The problem, as I see it, is about the IP dansguardian passes to squid.
 After a request, dansguardian gives squid the local machine's IP.
 
   
 Did you enable the auth helpers on dansguardian?  Also, if squid
 works correctly, the problem is on dansguardian and is thus
 off-topic for this list; you should write there.  Nevertheless, we
 have no problem helping anyway.

   
 
 If I change some options in dansguardian, such as originalip, I get the
 error above!

 I've tried messing around with the following options:

 forwardedfor

 usexforwardedfor
 
   
 On Dansguardian: No, and Yes, but this is another issue.

   
 
 and in squid

 follow_x_forwarded_for
 
   
 Yeah, via ACL, only accept these from the dansguardian box (localhost,
 most likely).
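
In squid.conf that looks roughly like the following sketch (assuming
dansguardian runs on the same host as squid):

   # trust X-Forwarded-For only when it comes from the dansguardian box
   acl dg_host src 127.0.0.1
   follow_x_forwarded_for allow dg_host
   follow_x_forwarded_for deny all

(This requires squid built with --enable-follow-x-forwarded-for, as
noted above.)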

 I hope this helps,

 Ildefonso Camargo
   
 

   


[squid-users] images occasionally don't get through

2010-02-12 Thread Folkert van Heusden
Situation: user using Microsoft Internet Explorer 8 (although I've seen it
with other versions as well), Squid version 2.7stable3-4.1 (Debian package).
Very often when that user surfs to a site, no images (or not all of them)
are shown. When he presses ctrl+refresh, they do appear.
In the access.log I see this:

192.168.0.99 - - [12/Feb/2010:13:57:59 +] GET
http://SITE/equotes_BPNL/images/new_imgs/bl-watermark.jpg HTTP/1.1 200 621
http://SITE /equotes_BPNL/shoppingBasketNavigation.do Mozilla/4.0
(compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0) TCP_MISS:DIRECT

192.168.0.99 - - [12/Feb/2010:13:58:03 +] GET
http://SITE/equotes_BPNL/images/new_imgs/bl-watermark.jpg HTTP/1.1 200 3679
http://SITE/equotes_BPNL/shoppingBasketNavigation.do; Mozilla/4.0
(compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0) TCP_MISS:DIRECT

As you can see, the first time only 621 bytes are sent; after the refresh
the full image, it seems, is sent.
I verified that the filesystem has enough diskspace (69% space used, 23%
inodes in use).

How can I fix this?


Folkert van Heusden

www.vanheusden.com




[squid-users] Advisory SQUID-2010:2 - Remote Denial of Service issue in HTCP

2010-02-12 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2010:2
__

Advisory ID:        SQUID-2010:2
Date:               February 12, 2010
Summary:            Remote Denial of Service issue in HTCP
Affected versions:  Squid 2.x,
                    Squid 3.0 - 3.0.STABLE23
Fixed in version:   Squid 3.0.STABLE24
__

http://www.squid-cache.org/Advisories/SQUID-2010_2.txt
__

Problem Description:

 Due to incorrect processing, Squid is vulnerable to a denial of
 service attack when receiving specially crafted HTCP packets.

__

Severity:

 This problem allows any machine to perform a denial of service
 attack on the Squid service when its HTCP port is open.

__

Updated Packages:

 This bug is fixed by Squid version 3.0.STABLE24.

 In addition, patches addressing these problems can be found in
 our patch archives.

Squid 2.7:
 http://www.squid-cache.org/Versions/v2/2.7/changesets/12600.patch

Squid 3.0:
http://www.squid-cache.org/Versions/v3/3.0/changesets/3.0-ADV-2010_2.patch


 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid-3.0 releases without htcp_port in their configuration
 file (the default) are not vulnerable.

 Squid-3.1 releases are not vulnerable.

 For unpatched Squid-2.x and Squid-3.0 releases: if your cache.log
 contains a line with "Accepting HTCP messages on port ..." when run
 with debug level 1 (debug_options ALL,1), your Squid is
 vulnerable.

 Alternatively, for unpatched Squid-2.x and Squid-3.0 releases:
 if the command
   squidclient mgr:config | grep htcp_port
 displays a non-zero HTCP port, your Squid is vulnerable.

__

Workarounds:

 For Squid-2.x:
  * Configuring htcp_port 0 explicitly

 For Squid-3.0:
  * Ensuring that any unnecessary htcp_port settings left in
    squid.conf after upgrading to 3.0 are removed.
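
 The explicit disable is a single squid.conf line:

   htcp_port 0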

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the squid-users@squid-cache.org mailing list is your primary
 support point. For subscription details see
 http://www.squid-cache.org/Support/mailing-lists.html.

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 http://www.squid-cache.org/bugs/.

 For reporting of security sensitive bugs send an email to the
 squid-b...@squid-cache.org mailing list. It's a closed list
 (though anyone can post) and security related bug reports are
 treated in confidence until the impact has been established.

__

Credits:

 The vulnerability was discovered by Kieran Whitbread.

__

Revision history:

 2010-02-12 14:11 GMT Initial Release
__
END


[squid-users] RE: images occasionally don't get through

2010-02-12 Thread Folkert van Heusden
To help the debugging, I also found a URL that is accessible to everyone:

failed:
--
192.168.0.90 - - [12/Feb/2010:15:28:21 +] GET
http://www.ibm.com/common/v15/main.css HTTP/1.0 200 10015
http://www-03.ibm.com/systems/hardware/browse/linux/?c=serversintron=Linux
2001t=ad Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
TCP_MEM_HIT:DIRECT

ok after ctrl+refresh:
-
192.168.0.90 - - [12/Feb/2010:15:29:03 +] GET
http://www.ibm.com/common/v15/main.css HTTP/1.0 200 54200
http://www-03.ibm.com/systems/hardware/browse/linux/?c=serversintron=Linux
2001t=ad Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
InfoPath.2; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
TCP_CLIENT_REFRESH_MISS:DIRECT


-Original Message-
From: Folkert van Heusden [mailto:folkert.van.heus...@bpsolutions.nl] 
Sent: Friday, 12 February 2010 14:40
To: squid-users@squid-cache.org
Subject: [squid-users] images occasionally don't get through

Situation: user using Microsoft Internet Explorer 8 (although I've seen it
with other versions as well), Squid version 2.7stable3-4.1 (Debian package).
Very often when that user surfs to a site, no images (or not all of them)
are shown. When he presses ctrl+refresh, they do appear.
In the access.log I see this:

192.168.0.99 - - [12/Feb/2010:13:57:59 +] GET
http://SITE/equotes_BPNL/images/new_imgs/bl-watermark.jpg HTTP/1.1 200 621
http://SITE /equotes_BPNL/shoppingBasketNavigation.do Mozilla/4.0
(compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0) TCP_MISS:DIRECT

192.168.0.99 - - [12/Feb/2010:13:58:03 +] GET
http://SITE/equotes_BPNL/images/new_imgs/bl-watermark.jpg HTTP/1.1 200 3679
http://SITE/equotes_BPNL/shoppingBasketNavigation.do; Mozilla/4.0
(compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0) TCP_MISS:DIRECT

As you can see, the first time only 621 bytes are sent; after the refresh
the full image, it seems, is sent.
I verified that the filesystem has enough diskspace (69% space used, 23%
inodes in use).

How can I fix this?


Folkert van Heusden

www.vanheusden.com




[squid-users] Squid 3.0.STABLE24 is available

2010-02-12 Thread Amos Jeffries

The Squid HTTP Proxy team is pleased to announce the
availability of the Squid-3.0.STABLE24 release!


This release contains the fix for Advisory SQUID-2010:2
Remote Denial of Service in HTCP.

All Squid-3.0 users needing HTCP support are advised to upgrade to this 
release as soon as possible.


All Squid-3.0 users not needing HTCP support: please check that htcp_port 
settings have been removed from your squid.conf file.



Please refer to the release notes at
http://www.squid-cache.org/Versions/v3/3.0/RELEASENOTES.html
if and when you are ready to make the switch to Squid-3.

This new release can be downloaded from our HTTP or FTP servers

 http://www.squid-cache.org/Versions/v3/3.0/
 ftp://ftp.squid-cache.org/pub/squid/
 ftp://ftp.squid-cache.org/pub/archive/3.0/

or the mirrors. For a list of mirror sites see

 http://www.squid-cache.org/Download/http-mirrors.dyn
 http://www.squid-cache.org/Download/mirrors.dyn

If you encounter any issues with this release please file a bug report.
 http://bugs.squid-cache.org/


Amos Jeffries


[squid-users] R: [squid-users] Allowing links inside websites in whitelist

2010-02-12 Thread CASALI COMPUTERS - Michele Brodoloni
Hello, it seems that my problem was caused by the extra line:
 http_access deny utenti_tg24

Removing it resolved my problem, and the user can only go to the sites listed 
in the whitelist and nowhere else.
No further annoying auth request popups.

Now the conf looks like:
 acl tg24 url_regex /etc/squid/whitelist_tg24
 http_access allow utenti_tg24 tg24

And now it works like a charm... I just had to add a couple of URLs to the 
whitelist to make it work properly.

Thanks

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, 10 February 2010 12:19
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Allowing links inside websites in whitelist

CASALI COMPUTERS - Michele Brodoloni wrote:
 Hello,
 I'm using Squid Version 2.6.STABLE21 with the squid_ldap_group auth helper for 
 authenticating groups of users.
 
 My problem is that some groups need to access certain sites only, but these 
 sites contain links to other external content outside the whitelist, causing 
 squid to pop up the annoying login box repeatedly. Is there a way to make 
 squid follow (or deny) those links without annoying the user?
 I would simply like auth to be requested just once: if the user is not 
 allowed, just deny the request without asking for authentication again.
 

What do you mean, again? Getting auth popups means they are not 
authenticated at all yet. Had they already been authenticated, something 
must have gone badly wrong.

Your config confirms that. Anybody visiting the whitelist gets through 
without authenticating at all.
The instant they go anywhere else they are verified for authentication 
and the blacklist tested.


The only ways to let people browse the web without auth popups are to remove 
auth completely, or to whitelist every site they need to visit. There seems 
to be something broken if the login box is popping up repeatedly.

You might try auto-blacklisting anything not whitelisted which is 
referred to from the whitelist sites.

Something like this just after the whitelist itself will prevent _any_ 
non-whitelisted link from a whitelisted page without involving auth:

   acl whiteRef referer_regex /etc/squid/whitelist
   http_access deny whiteRef

Be careful though. If you make that an auto-allow, you enable anybody to 
access the proxy by sending you an easily forged header.
You will also need to do something to let people click on the actual wanted 
links on those whitelisted pages.


 Here's my configuration (squid.conf) snippet:
 
 #
 auth_param basic program /usr/lib64/squid/squid_ldap_auth -b 
 dc=server,dc=local -f uid=%s -h 127.0.0.1
 auth_param basic children 10
 auth_param basic realm Server Proxy Server
 auth_param basic credentialsttl 8 hours
 
 external_acl_type ldap_group %LOGIN /usr/lib64/squid/squid_ldap_group -b 
 ou=Groups,dc=server,dc=local -f 
 (&(memberUid=%u)(cn=%g)(objectClass=posixGroup)) -h 127.0.0.1 -d
 
 acl utenti_tutti external ldap_group grp-proxy
 acl utenti_tg24  external ldap_group grp-tg24
 
 acl retelocale src 192.0.0.0/255.255.255.0

acl retelocale src 192.0.0.0/24

 acl whitelist dstdom_regex /etc/squid/whitelist
 http_access allow retelocale whitelist
 
 acl autenticati proxy_auth REQUIRED
 
 acl blacklist dstdom_regex /etc/squid/blacklist
 http_access deny  utenti_tutti blacklist
 http_access allow utenti_tutti
 
 acl tg24 url_regex /etc/squid/whitelist_tg24
 http_access allow utenti_tg24 tg24
 http_access deny utenti_tg24
 #
 
 Thank you very much 


Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
   Current Beta Squid 3.1.0.16




[squid-users] windows update problem through squid

2010-02-12 Thread David C. Heitmann

How can I do Windows Update through Squid 3.1.0.16?
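
A commonly cited starting point (based on the Squid wiki's Windows
Update guidance, not on a reply in this thread; the values are
illustrative) is to let Squid fetch ranged update downloads in full
and to permit large objects:

   range_offset_limit -1
   maximum_object_size 200 MB
   quick_abort_min -1 KB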




Re: [squid-users] high load issues

2010-02-12 Thread Justin Lintz
On Wed, Feb 10, 2010 at 4:23 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 http_access allow localhost
 http_access allow all

 Why?

Sorry I should mention this is running in a reverse proxy setup


 So what is the request/second load on Squid?
 Is RAID involved?

The underlying disks are running in a RAID 1 configuration.  Each
server is seeing around 170 req/sec during peak traffic


 You only have 4GB of storage. That's just a little bit above trivial for
 Squid.

 With 4GB of RAM cache and 4GB of disk cache, I'd raise the maximum object
 size a bit, or at least remove the maximum in-memory object size. It's
 forcibly pushing half the objects to disk, when there is just as much space
 in RAM to hold them.

 Amos

Would this only be the case for a forward proxy?  I'd say probably
less than 1% of our objects are anywhere near the memory limit.
Thanks for the reply


Re: [squid-users] Issues with storeurl_rewrite

2010-02-12 Thread John Villa
Hello,
So there is a bug.  This was fixed in some branch, but I do not know
if it's fixed in 2.7-STABLE[4,6,7].  I was seeing the same issue
where sometimes I can get storeurl_rewrite to work, but other
times I can't. This is the bug report:
http://bugs.squid-cache.org/show_bug.cgi?id=2678
I have seen other threads address this issue but they appear to be
stale or closed, e.g.:
http://www.squid-cache.org/mail-archive/squid-users/200906/0012.html
I was wondering if anyone knows which branch this fix was applied to.
Also, I tested with 2.HEAD, so I am not sure if the fix was committed.
I tried: squid-2.7.STABLE4, squid-2.7.STABLE6,
squid-2.7.STABLE-720100211, squid-2.HEAD-20100212, and 2.7-STABLE7.
Thanks,
-John
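
For reference, the typical Squid-2.7 wiring for this feature looks like
the sketch below (the helper path and domain are hypothetical):

   storeurl_rewrite_program /usr/local/bin/store_rewrite.pl
   acl storeurl_rewrite_list dstdomain .example.com
   storeurl_access allow storeurl_rewrite_list
   storeurl_access deny all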

On Thu, Feb 11, 2010 at 7:14 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 John Villa wrote:

 The servers are cacheable, yes.
 Thank you,
 On Feb 11, 2010, at 4:04 PM, Chris Robertson wrote:

 John Villa wrote:

 Hello,
 What I am trying to do is use storeurl_rewrite to rewrite some URLs
 before storing them in the cache, but I always get a cache MISS for
 those URLs configured in acl storeurl_rewrite_list. Is there something
 I might be missing, or perhaps I need to declare certain values above
 or below others in the configuration? Any help is appreciated.
 Thank You,
 -John

 Have you checked the cacheability of the links using something like
 REDBot (http://redbot.org/)?

 Chris



 Have you tried showing anyone who may be able to help your exact
 configuration?

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16



[squid-users] Cache manager analysis

2010-02-12 Thread J. Webster

What is the best place to start with in cache analysis?
Would it be cache size, memory object size, IO, etc.?
I'm looking to optimise the settings for my squid server.

Server: about 220GB available for the cache; I'm only using 4 MB at
present, as in the config below.

             system     D2812-A2
/0           bus        D2812-A2
/0/0         memory     110KiB BIOS
/0/4         processor  Intel(R) Core(TM)2 Duo CPU E7300 @ 2.66GHz
/0/4/5       memory     64KiB L1 cache
/0/4/6       memory     3MiB L2 cache
/0/4/0.1     processor  Logical CPU
/0/4/0.2     processor  Logical CPU
/0/7         memory     3MiB L3 cache
/0/2a        memory     1GiB System Memory
/0/2a/0      memory     1GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/2a/1      memory     DIMM DDR2 Synchronous 667 MHz (1.5 ns) [empty]
/0/2a/2      memory     DIMM DDR2 Synchronous 667 MHz (1.5 ns) [empty]
/0/2a/3      memory     DIMM DDR2 Synchronous 667 MHz (1.5 ns) [empty]
/0/1         processor
/0/1/0.1     processor  Logical CPU
/0/1/0.2     processor  Logical CPU


Current squid.conf:
-
auth_param basic realm Proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl cacheadmin src 88.xxx.xxx.xxx
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl maxuser max_user_ip -s 2
acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow manager cacheadmin
http_access deny manager
http_access allow ncsa_users
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny maxuser
http_access allow localhost
http_access deny all
icp_access allow all
http_port 8080
http_port 88.xxx.xxx.xxx:80
hierarchy_stoplist cgi-bin ?
cache_mem 100 MB
maximum_object_size_in_memory 50 KB
cache_replacement_policy heap LFUDA
cache_dir aufs /var/spool/squid 4 16 256
maximum_object_size 50 MB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
buffered_logs on
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern .   0   20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
half_closed_clients off
cache_mgr a...@aaa.com
cachemgr_passwd aaa all
visible_hostname ProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
forwarded_for off
client_db off
coredump_dir /var/spool/squid

  
_
Do you have a story that started on Hotmail? Tell us now
http://clk.atdmt.com/UKM/go/195013117/direct/01/

Re: [squid-users] PHP Auth Proxy

2010-02-12 Thread Matus UHLAR - fantomas
On 09.02.10 00:45, Bruno de Oliveira Bastos wrote:
 I need a PHP function where the user enters a username and password and
 the PHP page performs the auth against Squid

is this the same request as you posted on Feb 7th?

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
- Have you got anything without Spam in it?
- Well, there's Spam egg sausage and Spam, that's not got much Spam in it.


Re: [squid-users] PHP Auth Proxy

2010-02-12 Thread Bruno de Oliveira Bastos
No, I'm trying to set up a captive portal for free access, where users can
register and get logged in automatically.




2010/2/12 Matus UHLAR - fantomas uh...@fantomas.sk:
 On 09.02.10 00:45, Bruno de Oliveira Bastos wrote:
 I need a PHP function where the user enters a username and password and
 the PHP page performs the auth against Squid

 is this the same request as you posted on Feb 7th?

 --
 Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 - Have you got anything without Spam in it?
 - Well, there's Spam egg sausage and Spam, that's not got much Spam in it.



Re: [squid-users] Issues with storeurl_rewrite

2010-02-12 Thread John Villa
Got this to work by applying the patch from
http://bugs.squid-cache.org/show_bug.cgi?id=2678
to squid-2.HEAD. I am wondering if there is any risk running this in
production. I appreciate any feedback.
Thanks,
-John

On Fri, Feb 12, 2010 at 11:43 AM, John Villa john.joe.vi...@gmail.com wrote:
 Hello,
 So there is a bug.  This was fixed in some branch, but I do not know
 if it's fixed in 2.7-STABLE[4,6,7].  I was seeing the same issue
 where sometimes I can get storeurl_rewrite to work, but other
 times I can't. This is the bug report:
 http://bugs.squid-cache.org/show_bug.cgi?id=2678
 I have seen other threads address this issue but they appear to be
 stale or closed, e.g.:
 http://www.squid-cache.org/mail-archive/squid-users/200906/0012.html
 I was wondering if anyone knows which branch this fix was applied to.
 Also, I tested with 2.HEAD, so I am not sure if the fix was committed.
 I tried: squid-2.7.STABLE4, squid-2.7.STABLE6,
 squid-2.7.STABLE-720100211, squid-2.HEAD-20100212, and 2.7-STABLE7.
 Thanks,
 -John

 On Thu, Feb 11, 2010 at 7:14 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 John Villa wrote:

 The servers are cacheable, yes.
 Thank you,
 On Feb 11, 2010, at 4:04 PM, Chris Robertson wrote:

 John Villa wrote:

 Hello,
 What I am trying to do is use storeurl_rewrite to rewrite some URLs
 before storing them in the cache, but I always get a cache MISS for
 those URLs configured in acl storeurl_rewrite_list. Is there something
 I might be missing, or perhaps I need to declare certain values above
 or below others in the configuration? Any help is appreciated.
 Thank You,
 -John

 Have you checked the cacheability of the links using something like
 REDBot (http://redbot.org/)?

 Chris



 Have you tried showing anyone who may be able to help your exact
 configuration?

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16




[squid-users] url_rewrite_program question

2010-02-12 Thread Landy Landy
Hello.

I would like to test something I found on the internet: videocache and
thundercache along with squid. Videocache already works great with squid,
but thundercache uses the same directive, so I don't know how to use the
two of these together. I tried:

url_rewrite_program /usr/bin/python /usr/share/videocache/videocache.py 
/usr/bin/php /etc/thundercache/load.php 

but this doesn't work.

Any suggestions?

Thanks in advance.




  


[squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-12 Thread Tory M Blue
Squid 2.7Stable7
F12
AUFS on a ext3 FS
6gigs ram
dual proc
cache_dir aufs /cache 32000 16 256

Filesystem            Size  Used Avail Use% Mounted on
/dev/vda2              49G  3.8G   42G   9% /cache

configure options:  '--host=i686-pc-linux-gnu'
'--build=i686-pc-linux-gnu' '--target=i386-redhat-linux'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info'
'--exec_prefix=/usr' '--libexecdir=/usr/lib/squid'
'--localstatedir=/var' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-arp-acl'
'--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,getpwnam,multi-domain-NTLM,SASL,squid_radius_auth'
'--enable-ntlm-auth-helpers=no_check,fakeauth'
'--enable-digest-auth-helpers=password,ldap'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--with-large-files'
'--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--enable-esi' '--with-aio'
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl'
'--with-openssl' '--with-pthreads' 'build_alias=i686-pc-linux-gnu'
'host_alias=i686-pc-linux-gnu' 'target_alias=i386-redhat-linux'
'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -march=i386
-mtune=i686' 'LDFLAGS=-pie'

No load to speak of, very little iowait. Threads were configured as the default.

This is running on a striped pair of SSDs, and it's only a test script
(ya, it's hitting it a bit hard), but nothing that squid nor my
hardware should have an issue with.

I've searched and there really does not appear to be a solid answer,
except running out of CPU or running out of IOPS; neither appears to
be the case here. I figured if it was a thread issue, I would see a
bottleneck on my server (ya?). There's also the advice that if it only
happens a couple of times, ignore it. This is just some testing, and I
believe this congestion is possibly causing the 500 errors I'm seeing
while running my script.

Any pointers on where to look, etc.? 2.7stable6 (on an fc6/xen kernel)
had no such issues (yes, the SSDs are a new variable, but otherwise
the hardware is identical).

Thanks
Tory

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.13    0.00   40.66    4.80    0.00   41.41

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
vda            1539.50     11516.00      6604.00      23032      13208

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.60    0.00   38.29   11.08    0.00   37.03

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
vda            1385.07     11080.60         0.00      22272          0


[...@cache01 ~]$ free
            total       used       free     shared    buffers     cached
Mem:      5039312     421768    4617544          0      50372     183900
-/+ buffers/cache:     187496    4851816
Swap:     7143416          0    7143416

Totals since cache startup:
sample_time = 1266055655.813016 (Sat, 13 Feb 2010 10:07:35 GMT)
client_http.requests = 810672
client_http.hits = 683067
client_http.errors = 0
client_http.kbytes_in = 171682
client_http.kbytes_out = 2145472
client_http.hit_kbytes_out = 1809060
server.all.requests = 127606
server.all.errors = 0
server.all.kbytes_in = 321960
server.all.kbytes_out = 38104
server.http.requests = 127606
server.http.errors = 0
server.http.kbytes_in = 321960
server.http.kbytes_out = 38104
server.ftp.requests = 0
server.ftp.errors = 0
server.ftp.kbytes_in = 0
server.ftp.kbytes_out = 0
server.other.requests = 0
server.other.errors = 0
server.other.kbytes_in = 0
server.other.kbytes_out = 0
icp.pkts_sent = 0
icp.pkts_recv = 0
icp.queries_sent = 0
icp.replies_sent = 0
icp.queries_recv = 0
icp.replies_recv = 0
icp.query_timeouts = 0
icp.replies_queued = 0
icp.kbytes_sent = 0
icp.kbytes_recv = 0
icp.q_kbytes_sent = 0
icp.r_kbytes_sent = 0
icp.q_kbytes_recv = 0
icp.r_kbytes_recv = 0
icp.times_used = 0
cd.times_used = 0
cd.msgs_sent = 0
cd.msgs_recv = 0
cd.memory = 0
cd.local_memory = 487
cd.kbytes_sent = 0
cd.kbytes_recv = 0
unlink.requests = 0
page_faults = 1
select_loops = 467112
cpu_time = 681.173445
wall_time = -40015.496720
swap.outs = 126078
swap.ins = 1366134
swap.files_cleaned = 0
aborted_requests = 0


[squid-users] Squid reverse in front of an OWA webmail

2010-02-12 Thread Alejandro Facultad
Dear all, I have a private client network accessing an OWA server (Outlook 
Web Access over Exchange) through a Squid proxy in reverse mode:




CLIENT NETWORK -(HTTP)SQUID---(HTTP)---OWA



No SSL at all in any path.



The data are:



IP_client_network: 192.168.0.0/16

IP_squid: 10.1.1.1

IP_owa: 10.2.2.2

Domain_name_owa: www.correo.gb



I've done this main configuration in squid.conf:



https_port 10.1.1.1:80 defaultsite=www.correo.gb



cache_peer 10.2.2.2 parent 80 0 no-query originserver login=PASS 
name=owaServer




acl OWA dstdomain www.correo.gb

cache_peer_access owaServer allow OWA

never_direct allow OWA



# lock down access to only query the OWA server

http_access allow OWA

http_access deny all

miss_access allow OWA

miss_access deny all



After that, when I access through a web browser from the client network and 
type http://www.correo.gb, I don't succeed, and the access.log from squid 
tells me this:




192.168.0.22 TCP_MISS/302 584 GET http://www.correo.gb/ - 
FIRST_UP_PARENT/owaServer text/html




Please, can you help me or give me a more explicit howto on this topic?



Special thanks,



Alejandro




[squid-users] what happens when squid cache gets full?

2010-02-12 Thread J. Webster

I have my squid cache size set to 4 - is this in MB or KB?
What happens when the cache approaches its max size? Do I have to manually 
clear it, or does squid take care of that?
Thanks
  
_
Got a cool Hotmail story? Tell us now
http://clk.atdmt.com/UKM/go/195013117/direct/01/

Re: [squid-users] url_rewrite_program question

2010-02-12 Thread Chris Robertson

Landy Landy wrote:

Hello.

I would like to test something I found on the internet: videocache and
thundercache along with squid. Videocache already works great with squid,
but thundercache uses the same directive, so I don't know how to use the
two of these together. I tried:

url_rewrite_program /usr/bin/python /usr/share/videocache/videocache.py /usr/bin/php /etc/thundercache/load.php 
  


url_rewrite_program is a global directive that only accepts one script.


but this doesn't work.

Any suggestions?
  


It might be possible to make a wrapper script that takes the normal 
information (URL, client_ip / fqdn, user, method) and passes the URL 
to videocache and/or thundercache (as appropriate or serially feeding 
the output of one into the other), but I have no idea about the 
efficiency of doing so.
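
A rough illustration of that idea, as an untested sketch: it assumes
both helpers speak the classic one-line-in, one-line-out rewriter
protocol and treats an empty reply as "no rewrite"; the helper paths
are the ones from the original post.

   #!/usr/bin/env python
   # wrapper.py: chain two url_rewrite helpers serially (hypothetical)
   import subprocess
   import sys

   def coprocess(argv):
       # start a helper as a line-buffered coprocess
       return subprocess.Popen(argv, stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               universal_newlines=True, bufsize=1)

   first = coprocess(["/usr/bin/python",
                      "/usr/share/videocache/videocache.py"])
   second = coprocess(["/usr/bin/php", "/etc/thundercache/load.php"])

   def ask(helper, line):
       # send one request line, read back one reply line
       helper.stdin.write(line + "\n")
       helper.stdin.flush()
       return helper.stdout.readline().strip()

   for line in sys.stdin:
       line = line.strip()
       if not line:
           continue
       reply = ask(first, line)
       if not reply:
           # first helper declined to rewrite; offer it to the second
           reply = ask(second, line)
       # an empty reply here still means "no rewrite" to squid
       sys.stdout.write(reply + "\n")
       sys.stdout.flush()

squid.conf would then point at the wrapper alone, e.g.
url_rewrite_program /usr/local/bin/wrapper.py (path hypothetical).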



Thanks in advance.
  


Chris



Re: [squid-users] what happens when squid cache gets full?

2010-02-12 Thread Chris Robertson

J. Webster wrote:

I have my squid cache size set to 4 - is this in MB or KB?
  


To quote http://www.squid-cache.org/Doc/config/cache_dir/:

Usage:

cache_dir Type Directory-Name Fs-specific-data [options]
...
The ufs store type:
...
cache_dir ufs Directory-Name Mbytes L1 L2 [options]

'Mbytes' is the amount of disk space (MB) to use under this
directory.  The default is 100 MB.  Change this to suit your
configuration.  Do NOT put the size of your disk drive here.
Instead, if you want Squid to use the entire disk drive,
subtract 20% and use that value.



What happens when the cache approaches its max size? Do I have to manually 
clear it, or does squid take care of that?
  


From http://www.squid-cache.org/Doc/config/cache_swap_low/:

The low- and high-water marks for cache object replacement.
Replacement begins when the swap (disk) usage is above the
low-water mark and attempts to maintain utilization near the
low-water mark.  As swap utilization gets close to high-water
mark object eviction becomes more aggressive.  If utilization is
close to the low-water mark less replacement is done each time.

Defaults are 90% and 95%. If you have a large cache, 5% could be
hundreds of MB. If this is the case you may wish to set these
numbers closer together.
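
For a large cache that might mean something like (illustrative values
only):

   cache_swap_low 93
   cache_swap_high 94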



Thanks


Chris



Re: [squid-users] high load issues

2010-02-12 Thread Amos Jeffries

Justin Lintz wrote:

On Wed, Feb 10, 2010 at 4:23 PM, Amos Jeffries squ...@treenet.co.nz wrote:


http_access allow localhost
http_access allow all

Why?


Sorry I should mention this is running in a reverse proxy setup


So what is the request/second load on Squid?
Is RAID involved?


The underlying disks are running in a RAID 1 configuration.  Each
server is seeing around 170 req/sec during peak traffic


Sigh. My advice on RAID is to avoid it like the plague.
There are people who do get good results, but they all seem to be able 
to afford the most expensive hardware arrays as well.


From RAID1, you can almost halve the disk IO load by removing it.




You only have 4GB of storage. That's just a little bit above trivial for
Squid.

With 4GB of RAM cache and 4GB of disk cache, I'd raise the maximum object
size a bit, or at least remove the maximum in-memory object size. It's
forcibly pushing half the objects to disk, when there is just as much space
in RAM to hold them.

Amos


Would this only be the case for a forward proxy?  I'd say probably
less than 1% of our objects are anywhere near the memory limit.
Thanks for the reply


Less than 1% of your objects are close to or over 4KB in size? Okay 
then, you _really_ want to be using COSS.


All the UFS storage types use 4KB chunks to read/write from disk. COSS 
reads/writes in MB chunks, but aggregates many small objects very 
efficiently into that space at once.
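
In Squid 2.7, COSS is a cache_dir type. A minimal sketch (path and
sizes are illustrative only):

   # one COSS stripe for small objects, AUFS kept for the large ones
   cache_dir coss /var/spool/squid/coss 1024 max-size=65536 block-size=512
   cache_dir aufs /var/spool/squid 4096 16 256 min-size=65537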


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Cache manager analysis

2010-02-12 Thread Amos Jeffries

J. Webster wrote:

What is the best place to start with in cache analysis?
Would it be cache size, memory object size, IO, etc.?
I'm looking to optimise the settings for my squid server.


Step 0) migrate to the latest Squid 2.7 or 3.1 or if possible 2.HEAD 
(that one is only nominally beta, it's very stable in reality)


1) Start by defining 'optimize' ... are you going to prioritize...
 Faster service?
 More bandwidth saving?
 More client connections?

2a) For faster service, look at DNS delays, disk IO delays, maximizing 
cacheable objects (dynamic objects etc).


2b) For pure bandwidth savings, start with a look at object cacheability. 
Check dynamics are being cached, ranges are being fetched in full, etc.


3) Then profile all the objects stored over a reasonably long period, 
looking at size; compare with the age of objects being discarded.


3a) tune the storage limits to prioritize the storage locations, giving 
priority to RAM, then COSS, then AUFS/diskd.


3b) set the storage limits as high as possible to maximize the amount of 
data stored, anywhere.


4) take a good long look at your access controls and in particular the 
types speedy/fast/slow. You may get some speed benefits from fixing up 
the ordering a bit; regexes are killers, and remote lookups (helpers or 
DNS) are second worst.

  (some performance hints below)

5) repeat from (2b) as often as possible. Concentrate on traffic which 
logically seems storeable but gets a TCP_MISS anyway.


Objects served from cache lead to faster service times for those objects, 
so the speed and bandwidth goals are somewhat inter-related. But there is 
a tipping point somewhere where tuning one starts to impact the other.





Server: about 220GB available for the cache; I'm only using 4 MB at
present, as in the config below.

             system     D2812-A2
/0           bus        D2812-A2
/0/0         memory     110KiB BIOS
/0/4         processor  Intel(R) Core(TM)2 Duo CPU E7300 @ 2.66GHz
/0/4/5       memory     64KiB L1 cache
/0/4/6       memory     3MiB L2 cache
/0/4/0.1     processor  Logical CPU
/0/4/0.2     processor  Logical CPU
/0/7         memory     3MiB L3 cache
/0/2a        memory     1GiB System Memory
/0/2a/0      memory     1GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/2a/1      memory     DIMM DDR2 Synchronous 667 MHz (1.5 ns) [empty]
/0/2a/2      memory     DIMM DDR2 Synchronous 667 MHz (1.5 ns) [empty]
/0/2a/3      memory     DIMM DDR2 Synchronous 667 MHz (1.5 ns) [empty]
/0/1         processor
/0/1/0.1     processor  Logical CPU
/0/1/0.2     processor  Logical CPU


Current squid.conf:
-
auth_param basic realm Proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl all src 0.0.0.0/0.0.0.0


acl all src all


acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255


acl localhost src 127.0.0.1


acl cacheadmin src 88.xxx.xxx.xxx
acl to_localhost dst 127.0.0.0/8


acl to_localhost dst 127.0.0.0/8 0.0.0.0/32


acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl maxuser max_user_ip -s 2
acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow manager cacheadmin


Hint: add the localhost IP to the cacheadmin ACL and drop one full set 
of allow manager localhost tests.



http_access deny manager
http_access allow ncsa_users


Hint: drop the authentication down ...


http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost


... to here. All the attacks against your proxy for bad ports and 
sources will be dropped quickly by the security blanket settings. Load 
on your auth server will reduce and may speed up its response time.
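
The resulting order would look like this sketch:

   http_access deny !Safe_ports
   http_access deny CONNECT !SSL_ports
   http_access deny to_localhost
   http_access allow ncsa_users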


Hint 2: if possible, define an ACL for the network ranges where you 
accept logins. Use it like so:


  http_access allow localnet ncsa_users

 ... once again that speeds up the rejections, and helps by reducing 
the number of times the slow auth helpers need to be consulted.

[squid-users] Re: Configuration to allow users to connect via user-pass

2010-02-12 Thread cio...@gmail.com
I'm having a big issue trying to configure squid to allow users to
connect only via user/pass. I have 8 IPs on my server and I want each
user to be able to connect to a single IP using their credentials, but
all I get when I try in Firefox is the message The proxy server is
refusing connections, so I must have done something wrong. I'm new to
squid. I'm willing to pay for assistance, so please contact me if
interested. Anyhow, the help is much appreciated.

Here's my squid config:


http_port 
visible_hostname weezie
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid-passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users

header_access Allow allow all
header_access Authorization allow all
header_access WWW-Authenticate allow all
header_access Proxy-Authorization allow all
header_access Proxy-Authenticate allow all
header_access Cache-Control allow all
header_access Content-Encoding allow all
header_access Content-Length allow all
header_access Content-Type allow all
header_access Date allow all
header_access Expires allow all
header_access Host allow all
header_access If-Modified-Since allow all
header_access Last-Modified allow all
header_access Location allow all
header_access Pragma allow all
header_access Accept allow all
header_access Accept-Charset allow all
header_access Accept-Encoding allow all
header_access Accept-Language allow all
header_access Content-Language allow all
header_access Mime-Version allow all
header_access Retry-After allow all
header_access Title allow all
header_access Connection allow all
header_access Proxy-Connection allow all
header_access User-Agent allow all
header_access Cookie allow all
header_access All deny all

acl ip1 myip x.x.x.x

tcp_outgoing_address x.x.x.x ip1

acl user1 proxy_auth manilodisan


acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443        # https
acl SSL_ports port 563        # snews
acl SSL_ports port 873        # rsync
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443        # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210        # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280        # http-mgmt
acl Safe_ports port 488        # gss-http
acl Safe_ports port 591        # filemaker
acl Safe_ports port 777        # multiling http
acl Safe_ports port 631        # cups
acl Safe_ports port 873        # rsync
acl Safe_ports port 901        # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
icp_access allow all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:        1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern .        0    20%    4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
hosts_file /etc/hosts
coredump_dir /var/spool/squid


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-12 Thread Amos Jeffries

Tory M Blue wrote:

Squid 2.7Stable7
F12
AUFS on a ext3 FS
6gigs ram
dual proc
cache_dir aufs /cache 32000 16 256

Filesystem            Size  Used Avail Use% Mounted on
/dev/vda2              49G  3.8G   42G   9% /cache

configure options:  '--host=i686-pc-linux-gnu'
'--build=i686-pc-linux-gnu' '--target=i386-redhat-linux'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info'
'--exec_prefix=/usr' '--libexecdir=/usr/lib/squid'
'--localstatedir=/var' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-arp-acl'
'--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,getpwnam,multi-domain-NTLM,SASL,squid_radius_auth'
'--enable-ntlm-auth-helpers=no_check,fakeauth'
'--enable-digest-auth-helpers=password,ldap'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--with-large-files'
'--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--enable-esi' '--with-aio'
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl'
'--with-openssl' '--with-pthreads' 'build_alias=i686-pc-linux-gnu'
'host_alias=i686-pc-linux-gnu' 'target_alias=i386-redhat-linux'
'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -march=i386
-mtune=i686' 'LDFLAGS=-pie'

No load to speak of, very little iowait. Threads were configured as the default.

This is running on a striped pair of SSDs, and it's only a test script
(ya, it's hitting it a bit hard), but nothing that squid nor my
hardware should have an issue with.


What exactly is this test script doing then?

How many requests, of what type, is it pumping into squid? Over what 
time period?
 (Going by the cachemgr dump below I see your Squid is processing an 
average 1190 requests per second, but how intensive the peak load is 
remains unknown.)


If the disks can take it, you could bump the --with-aufs-threads=N up a 
bit and raise the ceiling. The default is 16 per cache_dir configured.
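
For example (the value here is only an assumption to illustrate):

   ./configure ... --with-aufs-threads=32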


Queue congestion starts appearing if there are 8 operations queued and 
not yet handled by the IO threads. The limit is then doubled (to 16) 
before the next warning appears. And so on...


Congestion can be hit easily by a sudden peak in load until Squid 
adjusts to your regular traffic. If the queues fill up too much you get 
the more serious Disk I/O overloading warning instead.





I've searched and there really does not appear to be a solid answer,
except running out of CPU or running out of IOPS; neither appears to
be the case here. I figured if it was a thread issue, I would see a
bottleneck on my server (ya?). There's also the advice that if it only happens a couple


Queue congestion is what I'd call a leaf thread/process bottleneck.


of times, ignore it. This is just some testing, and I believe this
congestion is possibly causing the 500 errors I'm seeing while running
my script.


Which are?



Any pointers on where to look, etc.? 2.7stable6 (on an fc6/xen kernel)
had no such issues (yes, the SSDs are a new variable, but otherwise
the hardware is identical).

Thanks
Tory

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.13    0.00   40.66    4.80    0.00   41.41

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
vda            1539.50     11516.00      6604.00      23032      13208

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.60    0.00   38.29   11.08    0.00   37.03

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
vda            1385.07     11080.60         0.00      22272          0


[...@cache01 ~]$ free
            total       used       free     shared    buffers     cached
Mem:      5039312     421768    4617544          0      50372     183900
-/+ buffers/cache:     187496    4851816
Swap:     7143416          0    7143416

Totals since cache startup:
sample_time = 1266055655.813016 (Sat, 13 Feb 2010 10:07:35 GMT)
client_http.requests = 810672
client_http.hits = 683067
client_http.errors = 0
client_http.kbytes_in = 171682
client_http.kbytes_out = 2145472
client_http.hit_kbytes_out = 1809060
server.all.requests = 127606
server.all.errors = 0
server.all.kbytes_in = 321960
server.all.kbytes_out = 38104
server.http.requests = 127606
server.http.errors = 0
server.http.kbytes_in = 321960
server.http.kbytes_out = 38104

Re: [squid-users] Squid reverse in front of an OWA webmail

2010-02-12 Thread Amos Jeffries

Alejandro Facultad wrote:
Dear all, I have a private client network accessing an OWA server 
(Outlook Web Access over Exchange) through a Squid proxy in reverse mode:




CLIENT NETWORK -(HTTP)SQUID---(HTTP)---OWA



No SSL at all in any path.



The data are:



IP_client_network: 192.168.0.0/16

IP_squid: 10.1.1.1

IP_owa: 10.2.2.2

Domain_name_owa: www.correo.gb



I've done this main configuration in squid.conf:



https_port 10.1.1.1:80 defaultsite=www.correo.gb



https_port?

You should be using:
  http_port 10.1.1.1:80 accel defaultsite=www.correo.gb



cache_peer 10.2.2.2 parent 80 0 no-query originserver login=PASS name=owaServer
acl OWA dstdomain www.correo.gb

cache_peer_access owaServer allow OWA

never_direct allow OWA



# lock down access to only query the OWA server

http_access allow OWA

http_access deny all

miss_access allow OWA

miss_access deny all



After that, when I access through a web browser from the client network 
and type http://www.correo.gb, I don't succeed, and the access.log from 
squid tells me this:




192.168.0.22 TCP_MISS/302 584 GET http://www.correo.gb/ - 
FIRST_UP_PARENT/owaServer text/html




Please, can you help me or give me a more explicit howto on this topic?


Hi Alejandro,

  That log line shows success. Squid passed the request from the client 
192.168.0.22 on to OWA at 10.2.2.2 (aka 'owaServer') and received a 
302 reply which was passed to the client at 192.168.0.22.


Was there perhaps some message in the reply page that would lend more of 
a clue as to what is happening?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16