Re: [squid-users] HDD Configuration Recommendations

2008-09-26 Thread Matus UHLAR - fantomas
 Hmm, is Squid still unable to work if one of its cache dirs has problems?
 Sounds like a call for a bug report ;)

On 26.09.08 17:52, Amos Jeffries wrote:
 It was already reported long ago. It made it onto the worklist for Squid-3
 recently. Should be done Someday Soon Now (tm) :-).

If you mean bug 410, I wasn't sure it's the same... quite possibly it is,
but for a full cache_dir (which that bug is about) there may be other ways
to handle it...

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Christian Science Programming: Let God Debug It!.


Re: [squid-users] Re: cannot browse website

2008-09-26 Thread Amos Jeffries
Upgrade your Squid. 2.5 is rather broken with interception and 
acceleration modes.


After upgrading to a later Squid, remove the NAT interception hack.
These two How-Tos tell you everything you need to get started.



For reverse-proxy (accelerating) of websites using Squid:
  http://wiki.squid-cache.org/SquidFaq/ReverseProxy

For interception of outbound network port 80 traffic:
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
or http://wiki.squid-cache.org/ConfigExamples/Intercept


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Re: cannot browse website

2008-09-26 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
This is my Squid, installed with apt-get install squid on the latest
Ubuntu:


[EMAIL PROTECTED]:/home/mirza# squid -v
Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid'
'--enable-async-io' '--with-pthreads'
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter'
'--enable-arp-acl' '--enable-epoll'
'--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests'
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm' '--enable-carp'
'--enable-follow-x-forwarded-for' '--with-large-files'
'--with-maxfd=65536' 'i386-debian-linux'
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux'
'target_alias=i386-debian-linux' 'CFLAGS=-Wall -g -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='


On Fri, Sep 26, 2008 at 2:59 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Upgrade your Squid. 2.5 is rather broken with interception and acceleration
 modes.

 After upgrading to a later Squid, remove the NAT interception hack. These
 two How-Tos tell you everything you need to get started.


 For reverse-proxy (accelerating) of websites using Squid:
  http://wiki.squid-cache.org/SquidFaq/ReverseProxy

 For interception of outbound network port 80 traffic:
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
 or http://wiki.squid-cache.org/ConfigExamples/Intercept


 Amos
 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE9




-- 
-=-=-=-=


Re: [squid-users] HDD Configuration Recommendations

2008-09-26 Thread F-D. Cami
On Fri, 26 Sep 2008 17:52:13 +1200
Amos Jeffries [EMAIL PROTECTED] wrote:

 Matus UHLAR - fantomas wrote:
  John Doe ha scritto:
  two disks = RAID 0 or 1
 
  RAID 1 is mirroring:
  - Pros: safe (goes on even with a dead HD), fast reads (from both disks)
  - Cons: you only use 50% of total HD space (500GB total in your case).
 
  RAID 0 is striping:
  - Pros: fast reads/writes and you use 100% of total HD (1TB)
  - Cons: unsafe (you lose 1 HD, you lose everything).
 
  Or just don't use RAID and create a cache_dir on each HD...
  Best would be RAID1 for the system and no RAID for the cache_dirs I think.
  
  On 25.09.08 11:39, Marcello Romani wrote:
  I would add that a dead or malfunctioning drive could harm service
  uptime if the cache dirs are not on RAID 1.
  Therefore I would suggest keeping everything on RAID 1.
 
 The three setups which are usable with Squid and RAID are:
 
 RAID 1 + single cache_dir - handles HDD failure silently, at the cost of
 half the disk space. Q: is your cache big enough, or the bandwidth
 important enough, to warrant saving the cache data?
 
 no-RAID + multi cache_dir - twice the cache space, at the cost of Squid
 going down if either HDD fails. BUT, it can be restarted manually without
 the failed cache_dir as soon as the failure is detected.
 
 RAID 0 + single cache_dir - already covered. Generally considered worse
 than no RAID.

Depending on the expected load on Squid, running with a few users on a fast
SAS/SCSI (probably not SATA, though) RAID 5 array is perfectly fine too.
Caveat emptor: I do not run an ISP :)

My own advice: if you need Squid to be fast, multiple cache_dirs on
separate drives are the way to go. If you need uptime, you have to use
either RAID 1 or RAID 5 for those cache_dirs. If you need uptime and have
a limited number of users, a single cache_dir on a RAID 5 partition is OK.
If you need both speed and uptime, multiple cache_dirs on multiple RAID 1
arrays might work, but I never went that route.

Evaluate your load (number of users, speed of connections to users, speed
of Internet connection), your needs (speed / uptime), build for uptime and
see if it handles the load. 

François


[squid-users] Bad Method

2008-09-26 Thread Dean, Barry
I am using Squid 3 as a transparent proxy on a NAT server to limit access to 
the Intermahweb.

It's part of a Network Access Control solution, pre-registered users are stuck 
in a private VLAN.

Squid is seeing a lot of Bad Request Methods that look like:

squid[]: [ID 702911 daemon.notice] clientParseRequestMethod: Unsupported method 
in request '__*__U___OX_ ___BU'_gA%__p_'


Is this something trying to do HTTPS on port 80?

Can I stop squid logging these, because I know already!

We have a script locally that picks up on unusual entries in log files and
mails them to me; I don't need telling any more!

Thanks

---
Barry Dean
Networks Team
Computing Services Department
Web: http://pcwww.liv.ac.uk/~bvd/
---
Nice boy, but about as sharp as a sack of wet mice.
-- Foghorn Leghorn




[squid-users] How can I remove an entry from the current cache using squidclient?

2008-09-26 Thread Paulo Lopes
I've installed squid and cached 2 requests, and I can see them using:

[EMAIL PROTECTED] squid]# /usr/sbin/squidclient -p 80
cache_object://localhost/objects
HTTP/1.0 200 OK
Server: squid/2.7.STABLE4
Date: Fri, 26 Sep 2008 08:35:34 GMT
Content-Type: text/plain
Expires: Fri, 26 Sep 2008 08:35:34 GMT
X-Cache: MISS from test
Via: 1.0 test:80 (squid/2.7.STABLE4)
Connection: close
 
KEY 134E77B5F13E86B8585D7FE0AF1CE79E
GET http://127.0.0.1/app/servlet?p1=992567224&p2=2.4
STORE_OK  IN_MEMORY SWAPOUT_DONE PING_DONE
CACHABLE,DISPATCHED,VALIDATED
LV:1222351389 LU:1222351521 LM:-1EX:-1
0 locks, 0 clients, 3 refs
Swap Dir 0, File 
inmem_lo: 0
inmem_hi: 718
swapout: 718 bytes queued
 
KEY 7FB72FC4992B0B2642793622D4C67347
GET http://127.0.0.1/app/servlet?p1=992567224&p2=2.2
STORE_OK  IN_MEMORY SWAPOUT_DONE PING_DONE
CACHABLE,DISPATCHED,VALIDATED
LV:1222351389 LU:1222351521 LM:-1EX:-1
0 locks, 0 clients, 3 refs
Swap Dir 0, File 0X01
inmem_lo: 0
inmem_hi: 519
swapout: 519 bytes queued
 
KEY 3F7E6EB1215D6456CB2C6576D4465E9D
GET cache_object://localhost/objects
STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_NONE
RELEASE_REQUEST,PRIVATE,VALIDATED
LV:-1LU:1222418134 LM:-1EX:1222418134
3 locks, 1 clients, 1 refs
Swap Dir -1, File 0X
inmem_lo: 0
inmem_hi: 1042
swapout: 0 bytes queued
Client #0, 0x88733d8
copy_offset: 1042
seen_offset: 1042
copy_size: 4096
flags:
 
 
Now say I'd like to remove the 1st entry. I do:
 
squidclient -p 80 -m PURGE
"http://127.0.0.1/app/servlet?p1=992567224&p2=2.4"
 
But I get a 404 and nothing is really purged. How can I purge it?
 
Cheers,
Paulo




Re: [squid-users] Re: cannot browse website

2008-09-26 Thread Henrik Nordstrom
On fre, 2008-09-26 at 10:33 +0700, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

 While trying to retrieve the URL: http://riset.gpi-g.com/
 
 The following error was encountered:
 
 * Connection to 202.169.51.119 Failed
 
 The system returned:
 
 (111) Connection refused

Which means networking issues, assuming 202.169.51.119 is the right
address Squid should be connecting to.

Regards
Henrik




[squid-users] Share authenticated sessions between two Squid servers

2008-09-26 Thread Pau Villarragut
Hi,

I want to build a cluster with two Squid server nodes. I have enabled
authentication against an Active Directory database via ntlm_auth.

Is it possible to share the authenticated user sessions between the nodes?


Thanks,
.-Pau


Re: [squid-users] How to disable cache and verify, also performance issues

2008-09-26 Thread Leonardo Rodrigues Magalhães



Nick Duda escreveu:

Ok, I've done this, but how can I verify that the cache is not active?
store.log is showing lots of activity, all GET requests.


   Disable store.log!!! In almost all cases it's useless.

   Is the machine that is running Squid also running something else???
Another service running on the same machine could be compromising some
I/O as well.
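
   In squid.conf, disabling it is one directive (valid at least in
Squid 2.6/2.7):

   cache_store_log none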


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






[squid-users] Hardware placement

2008-09-26 Thread Johnson, S

I've been digging around for an answer on this and am trying to figure out the 
best layout for attempting a WCCP2/Squid transparent proxy.

I've done several installs of Cisco WCCP2 using Bluecoat's proxy, but this 
would be a much cheaper method.

The hardware layout of Bluecoat was like the following (the way I did it 
before):


USER Workstation
    |
    |
    Cisco--Bluecoat(WCCP)-Win2k3 DC
    |
    |
    |
   Internet


The HTTP packet was transferred to the Cisco which was then forwarded to 
Bluecoat for validation.


The configurations I seem to be finding on the net for SQUID/WCCP are like the 
following:

User Workstation
    |
    |
    Cisco
    |
    |Win2k3(LDAP)
    |
Squid(WCCP)
    |(nat)
    |
    |
   Internet


What I'm trying to accomplish is that only my SQUID server can talk to my AD 
environment.  It's a weird situation in that this is a public network that is 
still being authenticated to our private side.  In other words, our students 
are going to be bringing in their computers but we don't want them to touch our 
private network in any form.

Can anyone make any recommendations/suggestions?

Thanks much.
  Scott


Re: [squid-users] Bad Method

2008-09-26 Thread Amos Jeffries

Dean, Barry wrote:

I am using Squid 3 as a transparent proxy on a NAT server to limit access to 
the Intermahweb.

It's part of a Network Access Control solution, pre-registered users are stuck 
in a private VLAN.

Squid is seeing a lot of Bad Request Methods that look like:

squid[]: [ID 702911 daemon.notice] clientParseRequestMethod: Unsupported method in 
request '__*__U___OX_ ___BU'_gA%__p_'


Is this something trying to do HTTPS on port 80?


Looks that way. Provided port 80 is the only one you are intercepting 
into Squid.




Can I stop squid logging these, because I know already!

We have a script locally that picks up on unusual entries in log files and
mails them to me; I don't need telling any more!


There is your answer: you need to adjust the script to ignore those
warnings.


"Unsupported method in request .*___.*" seems like a good pattern to
pick them out with.
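
For example, if your script scans a syslog-style file, a filter along
these lines would drop them before they reach your mailbox (the log path
and script details are hypothetical):

  grep -v 'clientParseRequestMethod: Unsupported method' /var/log/syslog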



Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Re: cannot browse website

2008-09-26 Thread Amos Jeffries

░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

This is my Squid, installed with apt-get install squid on the latest
Ubuntu:


[EMAIL PROTECTED]:/home/mirza# squid -v
Squid Cache: Version 2.6.STABLE18


Okay, your config file was full of directives that only work
in 2.5.


The wiki pages I sent are correct for your Squid 2.6.




On Fri, Sep 26, 2008 at 2:59 PM, Amos Jeffries [EMAIL PROTECTED] wrote:

Upgrade your Squid. 2.5 is rather broken with interception and acceleration
modes.

After upgrading to a later Squid, remove the NAT interception hack. These
two How-Tos tell you everything you need to get started.


For reverse-proxy (accelerating) of websites using Squid:
 http://wiki.squid-cache.org/SquidFaq/ReverseProxy

For interception of outbound network port 80 traffic:
 http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
or http://wiki.squid-cache.org/ConfigExamples/Intercept


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9








--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] How to disable cache and verify, also performance issues

2008-09-26 Thread Amos Jeffries

Leonardo Rodrigues Magalhães wrote:



Nick Duda escreveu:

Ok, I've done this, but how can I verify that the cache is not active?


If the only cache_dir entry in squid.conf says "cache_dir null /tmp",
Squid can't save stuff to disk.


Then you have a choice of whether to allow in-memory caching for some
items or not.


Set cache_mem to the amount of space in-memory objects are allowed to 
use for caching (0 KB to disable that too).
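
Putting those together, a minimal non-caching sketch for squid.conf,
assuming your Squid was built with the null store module (the /tmp path
is just a placeholder that the null store ignores):

  cache_dir null /tmp
  cache_mem 0 KB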



Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] HDD Configuration Recommendations

2008-09-26 Thread Amos Jeffries

Matus UHLAR - fantomas wrote:

Hmm, is Squid still unable to work if one of its cache dirs has problems?
Sounds like a call for a bug report ;)


On 26.09.08 17:52, Amos Jeffries wrote:
It was already reported long ago. It made it onto the worklist for Squid-3
recently. Should be done Someday Soon Now (tm) :-).


If you mean bug 410, I wasn't sure it's the same... quite possibly it is,
but for a full cache_dir (which that bug is about) there may be other ways
to handle it...



I did. A good solution is still being sought.

Single object read failures are already recoverable (object is erased 
and replaced).


Write failures are currently fatal, as writing happens in parallel with
the network transfer on large objects. It's particularly bad when it's a
disk-full failure like the one that bug was about.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] How can I remove an entry from the current cache using squidclient?

2008-09-26 Thread Amos Jeffries

Paulo Lopes wrote:

I've installed squid and cached 2 requests, and I can see them using:

[EMAIL PROTECTED] squid]# /usr/sbin/squidclient -p 80
cache_object://localhost/objects
HTTP/1.0 200 OK
Server: squid/2.7.STABLE4
Date: Fri, 26 Sep 2008 08:35:34 GMT
Content-Type: text/plain
Expires: Fri, 26 Sep 2008 08:35:34 GMT
X-Cache: MISS from test
Via: 1.0 test:80 (squid/2.7.STABLE4)
Connection: close
 
KEY 134E77B5F13E86B8585D7FE0AF1CE79E

GET http://127.0.0.1/app/servlet?p1=992567224&p2=2.4
STORE_OK  IN_MEMORY SWAPOUT_DONE PING_DONE
CACHABLE,DISPATCHED,VALIDATED
LV:1222351389 LU:1222351521 LM:-1EX:-1
0 locks, 0 clients, 3 refs
Swap Dir 0, File 
inmem_lo: 0
inmem_hi: 718
swapout: 718 bytes queued
 
KEY 7FB72FC4992B0B2642793622D4C67347

GET http://127.0.0.1/app/servlet?p1=992567224&p2=2.2
STORE_OK  IN_MEMORY SWAPOUT_DONE PING_DONE
CACHABLE,DISPATCHED,VALIDATED
LV:1222351389 LU:1222351521 LM:-1EX:-1
0 locks, 0 clients, 3 refs
Swap Dir 0, File 0X01
inmem_lo: 0
inmem_hi: 519
swapout: 519 bytes queued
 
KEY 3F7E6EB1215D6456CB2C6576D4465E9D

GET cache_object://localhost/objects
STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_NONE
RELEASE_REQUEST,PRIVATE,VALIDATED
LV:-1LU:1222418134 LM:-1EX:1222418134
3 locks, 1 clients, 1 refs
Swap Dir -1, File 0X
inmem_lo: 0
inmem_hi: 1042
swapout: 0 bytes queued
Client #0, 0x88733d8
copy_offset: 1042
seen_offset: 1042
copy_size: 4096
flags:
 
 
Now say I'd like to remove the 1st entry. I do:
 
squidclient -p 80 -m PURGE
"http://127.0.0.1/app/servlet?p1=992567224&p2=2.4"
 
But I get a 404 and nothing is really purged. How can I purge it?


Don't quote the URL.

Otherwise that's correct.
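
Note that PURGE also has to be allowed in squid.conf, if yours doesn't
already do so; a common arrangement is:

  acl purge method PURGE
  http_access allow purge localhost
  http_access deny purge

With that in place, the unquoted command should return 200 OK for a
cached object (and 404 only when the object really isn't in the cache):

  squidclient -p 80 -m PURGE http://127.0.0.1/app/servlet?p1=992567224&p2=2.4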

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Share authenticated sessions between two Squid servers

2008-09-26 Thread Amos Jeffries

Pau Villarragut wrote:

Hi,

I want to build a cluster with two Squid server nodes. I have enabled
authentication against an Active Directory database via ntlm_auth.

Is it possible to share the authenticated user sessions between the nodes?


Not that I ever heard of.

HTTP has no such thing as a 'session', so normal auth is sent with
every single request.


NTLM gets around that by authenticating not the request but the TCP
connection itself. You cannot have two servers and a client on the same
connection.


You can do NTLM auth on both squids though, and have the user
authenticate with whichever one it's talking to at the time.


Keep-alive and persistent connections also come into play, keeping a
whole series of requests from the client to a single squid going down the
same authenticated connection.
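
For reference, a minimal per-node setup would look something like this
(the helper path and child count are assumptions; adjust them to your
Samba install):

  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 10
  acl authenticated proxy_auth REQUIRED
  http_access allow authenticated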


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Hardware placement

2008-09-26 Thread Amos Jeffries

Johnson, S wrote:

I've been digging around for an answer on this and am trying to figure out the 
best layout for attempting a WCCP2/Squid transparent proxy.

I've done several installs of Cisco WCCP2 using Bluecoat's proxy, but this 
would be a much cheaper method.

The hardware layout of Bluecoat was like the following (the way I did it 
before):


USER Workstation
|
|
Cisco--Bluecoat(WCCP)-Win2k3 DC
|
|
|
   Internet


The HTTP packet was transferred to the Cisco which was then forwarded to 
Bluecoat for validation.


The configurations I seem to be finding on the net for SQUID/WCCP are like the 
following:

User Workstation
|
|
Cisco
|
|Win2k3(LDAP)
|
Squid(WCCP)
|(nat)
|
|
   Internet


What I'm trying to accomplish is that only my SQUID server can talk to my AD environment. 
 It's a weird situation in that this is a public network that is still being 
authenticated to our private side.  In other words, our students are going to be bringing 
in their computers but we don't want them to touch our private network in any form.

Can anyone make any recommendations/suggestions?

Thanks much.
  Scott


The WCCP part is quite easy:
  http://wiki.squid-cache.org/ConfigExamples/Intercept
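
The squid.conf end of it boils down to a few directives; a sketch,
assuming L2 redirection and a hypothetical router address:

  wccp2_router 192.168.0.1
  wccp2_forwarding_method 2   # 2 = L2 rewrite, 1 = GRE
  wccp2_return_method 2
  wccp2_service standard 0    # standard service 0 = HTTP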

The authentication part is not so easy. It's a browser security feature
not to authenticate against unknown machines.


Simple IP-based access controls are still perfectly usable though.

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


[squid-users] Recommendations for URL filtering

2008-09-26 Thread Johnson, S
Anyone have recommendations for a URL filtering list through squid?

 Regards,
   Scott


Re: [squid-users] latency issues squid2.7 WCCP

2008-09-26 Thread Adrian Chadd
Uhm, running without a cache would mean not using any disk storage.
I'd suggest trying to run squid with no aufs cache_dir lines, just the
null line (cache_dir null /). This rules out the disk storage as a
potential source of the failure.



Adrian

2008/9/25 Ryan Goddard [EMAIL PROTECTED]:

 Thanks for the response, Adrian.
 Is a recompile required to change to the internal DNS?
 I've disabled ECN, pmtu_disc and mtu_probing.
 cache_dir is as follows:
 (recommended by Henrik)

 cache_dir aufs /squid0 125000 128 256
 cache_dir aufs /squid1 125000 128 256
 cache_dir aufs /squid2 125000 128 256
 cache_dir aufs /squid3 125000 128 256
 cache_dir aufs /squid4 125000 128 256
 cache_dir aufs /squid5 125000 128 256
 cache_dir aufs /squid6 125000 128 256
 cache_dir aufs /squid7 125000 128 256

 No peak data available, here's some pre-peak data:
 Cache Manager menu
 5-MINUTE AVERAGE
 sample_start_time = 1222199580.85434 (Tue, 23 Sep 2008 19:53:00 GMT)
 sample_end_time = 1222199905.507274 (Tue, 23 Sep 2008 19:58:25 GMT)
 client_http.requests = 268.239526/sec
 client_http.hits = 111.741117/sec
 client_http.errors = 0.00/sec
 IOSTAT shows lots of idle time - I'm unclear what you mean by
 "profiling"?
 Also, I have not tried running without any cache - can you explain
 how this is done?

 appreciate the assistance.
 -Ryan



 Adrian Chadd wrote:

 Firstly, you should use the internal DNS code instead of the external
 DNS helpers.

 Secondly, I'd do a little debugging to see if its network related -
 make sure you've disabled PMTU for example, as WCCP doesn't redirect
 the ICMP needed. Other things like Window scaling negotiation and such
 may contribute.

 From a server side of things, what cache_dir config are you using?

 Whats your average/peak request rate? What about disk IO? Have you
 done any profiling? Have you tried running the proxy without any disk
 cache to see if the problem goes away?

 ~ terabyte of cache is quite large; I don't think any developers have
 a terabyte of storage in a box this size in a testing environment.

 2008/9/24 Ryan Goddard [EMAIL PROTECTED]:

 Squid 2.7.STABLE1-20080528 on Debian Linux 2.6.19.7
 running on quad dual-core 2.6GHz Opterons with 32 GB of RAM; 8x140GB disk
 partitions
 using WCCP L2 redirects transparently from a Cisco 4948 GigE switch

 Server has one GigE NIC for the incoming redirects and two GigE NICs for
 outbound http requests.
 Using IPTables to port forward HTTP to Squid; no ICP, auth, etc.;
 strictly a
 web cache using heap/LFUDA replacement
 and 16GB memory allocated with mem pools on, no limit.

 Used in an ISP environment, accommodating approx. 8k predominantly
 cable-modem customers during peak.

 The issue we're experiencing is some web pages taking in excess of 20
 seconds to load, and marked latency for customers running web-based
 speed tests, etc.
 Cache.log and Access.log aren't indicating any errors or timeouts; system
 operates 96 DNS instances and 32k file descriptors
 (neither has gotten maxed yet).
 General Runtime Info from Cachemgr taken during pre-peak usage:
 Start Time:Tue, 23 Sep 2008 18:07:37 GMT
 Current Time:Tue, 23 Sep 2008 21:00:49 GMT

 Connection information for squid:
  Number of clients accessing cache:3382
  Number of HTTP requests received:2331742
  Number of ICP messages received:0
  Number of ICP messages sent:0
  Number of queued ICP replies:0
  Request failure ratio: 0.00
  Average HTTP requests per minute since start:13463.4
  Average ICP messages per minute since start:0.0
  Select loop called: 11255153 times, 0.923 ms avg
 Cache information for squid:
  Request Hit Ratios:5min: 42.6%, 60min: 40.0%
  Byte Hit Ratios:5min: 21.2%, 60min: 18.6%
  Request Memory Hit Ratios:5min: 18.3%, 60min: 17.2%
  Request Disk Hit Ratios:5min: 33.6%, 60min: 33.3%
  Storage Swap size:952545580 KB
  Storage Mem size:8237648 KB
  Mean Object Size:40.43 KB
  Requests given to unlinkd:0
 Median Service Times (seconds)  5 min60 min:
  HTTP Requests (All):   0.19742  0.12106
  Cache Misses:  0.27332  0.17711
  Cache Hits:0.08265  0.03622
  Near Hits: 0.27332  0.16775
  Not-Modified Replies:  0.02317  0.00865
  DNS Lookups:   0.09535  0.04854
  ICP Queries:   0.0  0.0
 Resource usage for squid:
  UP Time:10391.501 seconds
  CPU Time:4708.150 seconds
  CPU Usage:45.31%
  CPU Usage, 5 minute avg:33.29%
  CPU Usage, 60 minute avg:33.36%
  Process Data Segment Size via sbrk(): 1041332 KB
  Maximum Resident Size: 0 KB
  Page faults with physical i/o: 4
 Memory usage for squid via mallinfo():
  Total space in arena:  373684 KB
  Ordinary blocks:   372642 KB809 blks
  Small blocks:   0 KB  0 blks
  Holding blocks:216088 KB 21 blks
  Free Small blocks:  0 KB
  Free Ordinary blocks:1041 KB
  Total in use:  588730 KB 100%
  Total free:  1041 KB 0%
  Total size:   

Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-26 Thread Adrian Chadd
Well, what are the complete request/reply headers for each of the
requests you're testing with?


Adrian

2008/9/25 BUI18 [EMAIL PROTECTED]:
 My Squid version is 2.6.STABLE14

 Here's my refresh_pattern from squid.conf

 #Suggested default:
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440

 #The following line will ignore a client no-cache header
 #refresh_pattern -i \.vid$   0   90% 2880 ignore-reload
 refresh_pattern -i \.vid$ 7200 100% 10080 ignore-reload

 refresh_pattern .   0   20% 4320

 A link to the file looks something like this -- 
 http://ftp.mydomain.com/websites/data/myvideofile.vid

 I have to set up a station to grab the header but I can tell you that it does 
 not seem out of the ordinary.

 There is one cache-related header: Pragma: no-cache

 I believe I handle this with the ignore-reload option.

 Our server is an IIS server running on Windows 2003.

 I also ran a test with min and max age of 0 and 1 respectively, and it seems 
 to work.  I receive a TCP_REFRESH_HIT, which is what I would have expected as 
 these files do not change.

 Please let me know if you have any other ideas on how to track down why it
 would be released from cache before the min age, with no Expires set on
 the object.

 Open to any suggestions.
 Thanks




 - Original Message 
 From: Michael Alger [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Sent: Wednesday, September 24, 2008 8:09:50 AM
 Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max

 On Wed, Sep 24, 2008 at 05:29:52AM -0700, BUI18 wrote:
 I went through your same thinking as you described below.

 I checked the Expires header from the server and we do not set
 one.  I checked via Fiddler web debug tool.  I also verified with
 the dev guys here regarding no Expires header.  I have set the min
 and max via refresh_pattern because of the absence of the Expires
 header thinking that Squid would keep it FRESH.

 Notice the -1 for expiration header (I do not set one on the
 object).  My min age is 5 days so I'm not sure why the object
 would be released from cache in less than 2 days.

 If the object was released from cache, when the user tried to
 access file, Squid reports TCP_REFRESH_MISS, which to me means
 that it was found in cache but when it sends a If-Modified-Since
 request, it thinks that the file has been modified (which it was
 not as seen by the lastmod date indicated in the store.log below.

 Interesting that it's caching the file for 2 days. What are the full
 headers returned with the object? Any other cache control headers?

 Is there any chance you have a conflicting refresh_pattern, so the
 freshness rules being applied aren't the ones you're expecting? May
 be worth doing some tests with very small max ages to confirm it's
 matching the right rule.
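
 For instance, a throwaway test rule with deliberately tiny values
 (adjust the pattern to your own URLs):

 refresh_pattern -i \.vid$ 0 0% 1 ignore-reload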








Re: [squid-users] HDD Configuration Recommendations

2008-09-26 Thread Alex Rousskov
On Fri, 2008-09-26 at 17:52 +1200, Amos Jeffries wrote:
 no-RAID + multi cache_dir - twice the cache space, at the cost of Squid
 going down if either HDD fails.

but see below
 
  Hmm, is Squid still unable to work if one of its cache dirs has problems?
  Sounds like a call for a bug report ;)
  
 
 It was already reported long ago. It made it onto the worklist for Squid-3
 recently. Should be done Someday Soon Now (tm) :-).

and there is already a patch for COSS and Squid2. For more details and
to track progress, please see

http://wiki.squid-cache.org/Features/CacheDirFailover

HTH,

Alex.



RE: [squid-users] Reverse proxy with LDAP authentication

2008-09-26 Thread Andrew Struiksma
  Here is the main part of my config:
 
  http_port 80 defaultsite=site.company.org
  https_port 443 cert=/etc/ssl/certs/company.org.cert \
  key=/etc/ssl/certs/company.org.key \
  defaultsite=site.company.org
 
  cache_peer site.company.org parent 443 0 no-query \
  originserver ssl sslflags=DONT_VERIFY_PEER name=myAccel
  acl our_sites dstdomain site.company.org
  acl all src 0.0.0.0/0.0.0.0
 
  auth_param basic program /usr/lib/squid/ldap_auth \
  -R -b dc=company,dc=org \
  -D cn=squid_user,cn=Users,dc=company,dc=org \
  -w password -f sAMAccountName=%s -h 192.168.1.2
  auth_param basic children 5
  auth_param basic realm Our Site
  auth_param basic credentialsttl 5 minutes
 
  acl ldap_users proxy_auth REQUIRED
 
  http_access allow ldap_users
  http_access allow our_sites

 If I understand you correctly that should be:

  http_access allow our_sites ldap_users
  http_access deny all

  cache_peer_access myAccel allow our_sites
 
  Andrew
 

 That config should do it.
 Perhaps add a "never_direct allow our_sites" to prevent
 non-peered traffic.

OK. I'll add in those options. Currently, if a user connects on port 80 they
are not forwarded to port 443 until after logging in and actually clicking a
link on the website. They are then prompted to log in a second time on port
443. Can Squid redirect to port 443 immediately, before login, or do I need
to set up Apache to do this?

Can I add an ACL to permit users from certain IP ranges to access the site
without having to authenticate to LDAP? I'm thinking about sending all users
through Squid, but I don't want to force users on our LAN to have to
authenticate.
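
The sort of thing I have in mind is an src ACL checked before the
proxy_auth one; a sketch, with a made-up LAN range:

  acl lan src 192.168.0.0/24
  http_access allow our_sites lan
  http_access allow our_sites ldap_users
  http_access deny all

Since the lan line matches first, LAN clients would never hit the auth ACL.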

Thanks!

Andrew


[squid-users] Re: latency issues squid2.7 WCCP

2008-09-26 Thread Ryan Thoryk
The latency is most likely coming from your disk caches, and I'm also
assuming that the sheer size of them could be contributing to it. Also
remember that RAID-0 (or similar) won't help improve performance (since
it's access times that you need, not throughput). We have a much smaller
load here (one machine peaks at around 1100 users), and switching from
7200rpm SATA drives to 15k SCSI drives solved a lot of latency issues.

One thing you can do is use the max_open_disk_fds value; we found that
our SATA machines had major performance issues when over 50 file
descriptors were open at once. That parameter tells Squid to bypass
the disk cache if the number of open fd's is over that value (that would
definitely help during peak times). You can find the current number of
open fd's in the "Store Disk files open" value on your cachemgr general
runtime page.
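
As a hedged example, matching the threshold we saw on our SATA boxes
(tune it to your own disks):

  max_open_disk_fds 50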

Also I'd recommend greatly decreasing the size of your disk caches, and
increasing the cache_mem value (since you have 32 gigs of RAM, I'd
probably try to get the Squid process up to around 30 gigs).

Ryan Thoryk

Ryan Goddard wrote:
 
 Thanks for the response, Adrian.
 Is a recompile required to change to the internal DNS?
 I've disabled ECN, pmtu_disc and mtu_probing.
 cache_dir is as follows:
 (recommended by Henrik)
 cache_dir aufs /squid0 125000 128 256
 cache_dir aufs /squid1 125000 128 256
 cache_dir aufs /squid2 125000 128 256
 cache_dir aufs /squid3 125000 128 256
 cache_dir aufs /squid4 125000 128 256
 cache_dir aufs /squid5 125000 128 256
 cache_dir aufs /squid6 125000 128 256
 cache_dir aufs /squid7 125000 128 256
 
 No peak data available, here's some pre-peak data:
 Cache Manager menu
 5-MINUTE AVERAGE
 sample_start_time = 1222199580.85434 (Tue, 23 Sep 2008 19:53:00 GMT)
 sample_end_time = 1222199905.507274 (Tue, 23 Sep 2008 19:58:25 GMT)
 client_http.requests = 268.239526/sec
 client_http.hits = 111.741117/sec
 client_http.errors = 0.00/sec
 IOSTAT shows lots of idle time - I'm unclear what you mean by
 "profiling"?
 Also, I have not tried running without any cache - can you explain
 how this is done?
 
 appreciate the assistance.
 -Ryan



[squid-users] Cannot Access Site w/ Squid 2.6 Stable 3 Transparent Mode

2008-09-26 Thread Brodsky, Jared S.
Hi all,

I am running Squid 2.6 Stable 3 in transparent mode, and none of my users
can access msnbc.com from behind our cache.  The cache box itself
has no problem reaching the site via wget, lynx, or telnet.  The strange
part is that a direct URL to one of their CSS files loads fine from
behind squid. I can also telnet to msnbc.com from machines behind the
proxy.  I have added the following to my conf file, which had no effect:

acl msnbc dstdomain .msnbc.msn.com
cache deny msnbc

I have tried this with no luck as well:
http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#head-699d810035c099c8b4bff21e12bb365438a21027

Note: msnbc.com redirects to www.msnbc.msn.com.
We can get to msn.com just fine, as well as cnbc.com.  I think there is
a problem with my conf file: the rewrite statements I have, in
conjunction with how msnbc redirects their traffic.  I have attached my
conf file below.

Any help would be greatly appreciated.


http_port 81 transparent tproxy
http_port 3128
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem  525 MB
cache_swap_low 93
cache_swap_high 95
maximum_object_size 300 MB
maximum_object_size_in_memory  100 MB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /var/spool/squid/ 20480 16 256
access_log /var/log/squid/access.log
log_fqdn on
ftp_user [EMAIL PROTECTED]
ftp_list_width 64
hosts_file /etc/hosts
acl adzapports myport 81
acl adzapmethods method HEAD GET
url_rewrite_access deny !adzapmethods
url_rewrite_access allow adzapports
refresh_pattern ^ftp: 1440 20% 10080 reload-into-ims
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320 reload-into-ims
refresh_pattern cgi-bin 0 0% 0
refresh_pattern \? 0 0% 0
refresh_pattern . 0 20% 4320
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
quick_abort_min 64 KB
quick_abort_max 512 KB
quick_abort_pct 50
range_offset_limit 1 MB
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 873 # rsync
acl purge method PURGE
acl CONNECT method CONNECT
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
quick_abort_min -1 KB
acl youtube dstdomain .youtube.com
cache allow youtube
hierarchy_stoplist cgi-bin ?
cache allow all
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl gtn_lan src 10.1.1.0/24
acl gtn_lan2 src 10.100.1.0/24
http_access allow gtn_lan
http_access allow gtn_lan2
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
tcp_outgoing_address 10.100.1.2
log_access deny localhost
log_access allow all
cache_mgr [EMAIL PROTECTED]
mail_from [EMAIL PROTECTED]
cache_effective_group proxy
httpd_accel_no_pmtu_disc on
append_domain .greatertalent.com
memory_pools_limit 64 MB
via off
forwarded_for off
snmp_port 3401
acl snmp_public snmp_community public
acl snmp_probes src 10.1.1.0/24
acl snmp_probes src 10.100.1.0/24
snmp_access allow snmp_public localhost snmp_probes
snmp_access deny all
strip_query_terms off
coredump_dir /var/spool/squid
pipeline_prefetch on






[squid-users] multiple web ports squid not working?

2008-09-26 Thread jason bronson
I've got an issue where I have multiple ports: one webserver is on port
80 and one is on 21080.
21080 works fine, but port 80 from the outside world doesn't work at
all - I get a blank index.php file offered as a download by the browser.

So I run tcpdump on port 80 and I see connections coming in, but squid
is not writing anything to the logs, even with full debugging.

I run wget from my squid server to see if it can talk with the
webserver, and it returns the 21080 webserver page???

What bothers me is I'd think at this point the outside world would at
least see the 21080 server, not a blank index file. And I'd think
something would get written to squid's logs?

Please, if anyone knows what I'm doing wrong, shoot me a hint!

Im running
/usr/local/squid/sbin/squid -v
Squid Cache: Version 2.7.STABLE3
configure options:


heres my configuration

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.108.0.0/24  # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443  # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210  # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280  # http-mgmt
acl Safe_ports port 488  # gss-http
acl Safe_ports port 591  # filemaker
acl Safe_ports port 777  # multiling http
acl Safe_ports port 3128
acl Safe_ports port 21080
acl CONNECT method CONNECT
http_access allow all
http_access allow manager localhost
http_access deny manager
http_access allow localnet
http_access deny all
icp_access allow localnet
icp_access deny all
http_port 80 accel defaultsite=64.132.59.237
http_port 21080 accel defaultsite=64.132.59.237
hierarchy_stoplist cgi-bin ?
access_log /usr/local/squidserver/var/logs/access.log squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
negative_ttl 0 seconds
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
visible_hostname 127.0.0.1
coredump_dir /usr/local/squidserver/var/cache
cache_peer 10.108.50.39 parent 21080 0 no-query originserver name=mybox
cache_peer 10.108.30.82 parent 80 0 no-query originserver name=webapps
cache_peer_access webapps allow all
cache_peer_access mybox allow all
cache_peer_access webapps deny all
cache_peer_access mybox deny all


[squid-users] Expires: vs. Cache-Control: max-age

2008-09-26 Thread Chris Woodfield

Hi,

Can someone confirm whether the Expires: or Cache-Control: max-age
parameter takes precedence when both are present in an object's
headers? My assumption would be that Cache-Control: max-age is
preferred, but we're seeing some behavior that suggests otherwise.


Specifically, we're seeing Expires: headers in the past resulting in  
refresh checks against our origin even when a Cache-Control: max-age  
header is present and the cached object should be fresh per that metric.
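
For reference, RFC 2616 (section 14.9.3) says max-age takes precedence
over Expires, so a response shaped like the following (hypothetical
values) should stay fresh for 86400 seconds from its Date, despite the
already-stale Expires:

  HTTP/1.1 200 OK
  Date: Fri, 26 Sep 2008 10:00:00 GMT
  Expires: Thu, 25 Sep 2008 10:00:00 GMT
  Cache-Control: max-age=86400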


What we're seeing is somewhat similar to bug 2430, but I want to make  
sure what we're seeing isn't expected behavior.


Thanks,

-Chris