RE: [squid-users] squid cachehttp hits oid for solarwinds

2009-04-16 Thread Gregori Parker
The (Counter32) OID is .1.3.6.1.4.1.3495.1.3.2.1.2.0 in every version that I've 
looked at in the past few days, but I can't check every version ever, so YMMV.

The best way to make sure it's right is to give your squid/share/mib.txt to Orion 
and then just reference it as cacheHttpHits.
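
If you want to sanity-check the value against a running Squid before wiring it
into Orion, something like this works (host and community string are placeholders
for whatever your snmp_port / snmp_access setup uses; 3401 is Squid's default
SNMP port):

snmpget -v1 -c public squidhost.example.com:3401 .1.3.6.1.4.1.3495.1.3.2.1.2.0
snmpwalk -v1 -c public squidhost.example.com:3401 .1.3.6.1.4.1.3495.1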


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, April 16, 2009 3:56 AM
To: Ghasem Abbasi
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squid cachehttp hits oid for solarwinds

Ghasem Abbasi wrote:
 Hi,
 
 I want to add Squid Cache to SolarWinds Orion to monitor performance, but I 
 can't find the OID for this.
 
 Please help me.
 

squid-data-dir/mib.txt

Content depends on your version of Squid installed.
File location may vary if installed with a distro packaging system.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
   Current Beta Squid 3.1.0.7


RE: [squid-users] SNMP MIB updates?

2009-04-15 Thread Gregori Parker
Thanks for the reply Amos, I agree with your statements and am glad that this 
might get placed on the someday-roadmap for Squid.  I may not have permission 
to send to squid-dev, so please send it on if it doesn't find its way.

I have been working off of the squid/share/mib.txt MIB that came with the 
3.0-STABLE13 build I'm currently running on most systems.

cachePeerTable should be constructed using standard integer index, initialized 
on first run and adjusted as configuration changes and gets reloaded, with one 
of the OIDs returning the IP as a label.  So, I build and configure squid and 
run it for the first time with 3 cache peers configured, they get indexed as 
1,2,3 on the table...I reconfigure squid and remove all 3 peers (peers == 
parents and/or siblings, something that needs to be decided as well), replacing 
them with new ones - at this point you can either rebuild the table using the 
new peers or append them as 4,5,6 and blank out 1,2,3.  Cisco switches build 
their ifIndex table using the latter method, which works well when linecards 
are added or removed (granted, switchports in general are a bit more static 
than an application level configuration).

Also, I have tried the -Cc option when snmpwalk-ing, and one big problem I run 
into is that I have two parents configured with the same IP (different 
hostnames)...this causes snmpwalk to get stuck endlessly grabbing the same OID.  
Something like Cacti won't even begin to handle this table gracefully, so it's 
essentially unusable.
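
(For reference, -Cc is the net-snmp flag that tells snmpwalk not to stop when the
returned OID fails to increase; host and community below are placeholders:

snmpwalk -v1 -c public -Cc squidhost.example.com:3401 .1.3.6.1.4.1.3495.1

With two peers sharing an IP in the table, that walk just keeps returning the
same row instead of terminating.)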

cacheHtcp* is great...but that would just make me want a cacheCarp as well.  
Perhaps you could just abstract whatever is being used under something like 
cacheSiblingProto?

In regards to adding a cacheHttpMisses (and pending, and negative) - I noticed 
that the cacheIpCache table has OIDs for misses, pending hits and negative 
hits, so why can't the cacheProtoAggregateStats have these as well for HTTP?  
I've run into cacti templates that get this elusive metric by subtracting 
cacheHttpHits from cacheProtoClientHttpRequests.
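
Just to spell out the workaround those templates use - a rough sketch, assuming
cacheProtoClientHttpRequests sits at .1 of the same aggregate-stats group, with
placeholder host/community:

REQS=$(snmpget -Ovq -v1 -c public squidhost.example.com:3401 .1.3.6.1.4.1.3495.1.3.2.1.1.0)
HITS=$(snmpget -Ovq -v1 -c public squidhost.example.com:3401 .1.3.6.1.4.1.3495.1.3.2.1.2.0)
echo "approximate HTTP misses: $((REQS - HITS))"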

In regards to cacheMemUsage, I'm just interested in seeing a cacheMemCacheUsage 
added.  This would be especially useful for diskless caches...there's a 
cacheSysVMsize that tells me how much total memory can be used for caching, but 
nothing that tells me how much is actually used.  Seeing these metrics graphed 
over time would help determine optimal high/low swap values.  MemUsage is 
currently an integer OID counting in KB - that should be changed to a Counter32 
and represented in bits.

In regards to bits vs KB, everything everywhere is represented in bits, except 
for Squid...which is no big deal, except that it requires Cacti users to build 
in some extra math (result = value * 1024 * 8).  This is very low hanging fruit 
IMO.

Not sure what to say about the CPU usage metric, perhaps it's not refreshing 
often enough (if it's meant to be a gauge).  Perhaps it could be indexed into 
time-averages similar to the service timers, i.e. 1 min, 5 min and 60 min 
averages.  Shouldn't be too difficult to do.

Regarding the differences between the cacheProtoAggregateStats and cacheIpCache 
tables: I can share graphs with you offline, but the curves graph out to be 
exactly the same, the numbers are just way off.  For example, graphing HTTP 
requests per second using data from the cacheProtoAggregateStats table I see a 
current of 350 rps (and about 310 hits per second), while graphing IP requests 
per second using data from the cacheIpCache table I see a current of 1190 rps 
(and about 1150 hits per second).  Notice here that the differences match up 
perfectly, and the deltas are always the same; the IP table just counts a LOT 
more hits and requests over time than the HTTP/ProtoAggStats table does.  I 
can't account for the difference, so a detailed definition would help me a lot.  
I'm going to try turning off ICP/HTCP and see if there is any difference.  
If you want to see my graphs for a better idea of what I'm saying, I can attach 
them and send off-list.

Thanks guys,
Gregori



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, April 15, 2009 5:23 AM
To: Gregori Parker
Cc: squid-users@squid-cache.org; Squid Developers
Subject: Re: [squid-users] SNMP MIB updates?

Gregori Parker wrote:
 I was creating a fresh batch of cacti graph templates for Squid the other day 
 (focused on reverse proxy setups, I will release them soon), and while 
 crawling the Squid MIB I noticed that HTCP metrics don't register anywhere.  
 Furthermore, the entire MIB seems to be in need of updating - here's a list 
 of things I would like to understand or see updated at some point...
 

Excellent to see someone working on that update and the squid SNMP stuff 
too. Thank you.

In answer to your points below, please keep follow-up about any of these on the 
squid-dev mailing list (cc'd).

Firstly, which of the _3_ Squid MIBs are you trying to get updated

[squid-users] SNMP MIB updates?

2009-04-14 Thread Gregori Parker
I was creating a fresh batch of cacti graph templates for Squid the other day 
(focused on reverse proxy setups, I will release them soon), and while crawling 
the Squid MIB I noticed that HTCP metrics don't register anywhere.  
Furthermore, the entire MIB seems to be in need of updating - here's a list of 
things I would like to understand or see updated at some point...

* cachePeerTable should be re-created so that it doesn't index by IP address 
(results in an 'OID not increasing' error when walking!)
* update cacheIcp* to register HTCP now that it is built in by default
* add a cacheHttpMisses (and pending, and negative) to cacheProtoAggregateStats
* more detailed memory counters - the current cacheMemUsage doesn't seem to 
measure how much memory is being used for caching (in my diskless cache setups, 
the counter flatlines around 600MB when I know there is much more than that 
being used)
* cacheCpuUsage is constant at 8% across a variety of squid servers at all 
times - I can see that this doesn't match up with what I see locally via top or 
in my normal unix cpu graphs.
* throughput should be measured in bits instead of kilobytes throughout the MIB

Btw, I've been trying to understand the differences between the 
cacheProtoAggregateStats and cacheIpCache tables - I get very different numbers 
in terms of requests, hits, etc., and I can't account for it.

Thanks in advance,
Gregori



RE: [squid-users] Reverse Proxy + Multiple Webservers woes

2009-04-07 Thread Gregori Parker
You need to add the vhost option to http_port so that Squid determines
the parent by hostname.

i.e.

http_port 80 accel defaultsite=example.com vhost
cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
cache_peer_domain server_2 dev.example.com
cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
cache_peer_domain server_1 example.com

*** NOTE: if you have DNS for example.com resolving to Squid, then make
sure you override that in /etc/hosts on the squid boxes, pointing those
records to your origins so that you don't run into a loop.
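
For example, /etc/hosts on the Squid box might carry (IPs taken from the
cache_peer lines above):

192.168.1.115   example.com www.example.com
192.168.1.114   dev.example.com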

For ACLs, I would recommend the following:

acl your_site1 dstdomain example.com
acl your_site2 dstdomain dev.example.com
acl origin1 dst 192.168.1.114
acl origin2 dst 192.168.1.115
acl acceleratedPort port 80

cache allow your_site1
cache allow your_site2
http_access allow origin1 acceleratedPort
http_access allow origin2 acceleratedPort
http_access deny all


GL, HTH

- Gregori


-Original Message-
From: Karol Maginnis [mailto:nullo...@sdf.lonestar.org] 
Sent: Tuesday, April 07, 2009 11:30 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Reverse Proxy + Multiple Webservers woes

Hello,

I am new to squid but not new to reverse proxies.  I am trying to 
implement a proxy that would work like this:

www.example.com - server 1
example.com - server 1
dev.example.com - server 2

I have read the wiki here:
wiki.squid-cache.org/SquidFaq/ReverseProxy

But I can't get it to work and I am about to pull my hair out.

My squid.conf looks like:

http_port 80 accel defaultsite=example.com
cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
cache_peer_domain server_2 dev.example.com
cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
cache_peer_domain server_1 example.com


This gives me a big fat: Access Denied

So I added this to my squid.conf:
---
acl our_sites dstdomain example.com dev.example.com
http_access allow our_sites
---

This clears the Access Denied, however now all traffic goes to server_1
(the .115 address).

I have tried all sorts of cute ACLs, including but not limited to declaring 
ACLs for server_1 and server_2 respectively and allowing access to 
server_1 from server_1 sites and denying server_2 sites and vice versa. 
However this just gives me an Access Denied for all sites.

I have also tried every example found on this issue in the Wiki.  I feel
like the Wiki is leaving out a key config line that is causing this not to
work, but I could be wrong.

I am running squid:
Squid Cache: Version 2.7.STABLE6
configure options:  '--disable-internal-dns'

I hate sending such a simple question to a mailing list but I have read 
the squid wiki so much that I almost have it memorized as far as the 
ReverseProxy pages are concerned.

Thanks,
-KJ

nullo...@sdf.lonestar.org
SDF Public Access UNIX System - http://sdf.lonestar.org


RE: [squid-users] ...Memory-only Squid questions

2009-04-06 Thread Gregori Parker
Glad to help David, please let us know how it progresses.
 
Don't know if you saw this in the archives: 
http://www.mail-archive.com/squid-users@squid-cache.org/msg19824.html but it 
might help guide you on your SO_FAIL issue.  It might be worth moving to LRU 
and establishing a baseline of performance (using either SNMP+cacti or 
cachemgr) before moving to fancier replacement policies.  Personally, I would 
go 'store_log none' and not worry about it unless you see something in cache.log.
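
In squid.conf terms, a minimal sketch of that starting point ('store_log none'
above being shorthand for the cache_store_log directive; names per the 3.0 config):

cache_dir null /tmp
memory_replacement_policy lru
cache_store_log none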
 
The best all around advice I can give on Squid is to start simple!  Once 
everything works the way you expect, then start tweaking your way into 
complexity with a means to track the (in)effectiveness of each change you make 
(and a known good configuration that you can always go back to when you 
inevitably fubar the thing!).
 
- Gregori



From: David Tosoff [mailto:dtos...@yahoo.com]
Sent: Mon 4/6/2009 8:46 PM
To: squid-users@squid-cache.org; Chris Robertson
Subject: Re: [squid-users] ...Memory-only Squid questions




Thanks Chris.

I had already read both the wiki post and the thread you directed me to 
before I posted this to the group.

I had already compiled heap into my squid before this issue happened.  I am 
using heap GDSF.  And I wasn't able to find --enable-heap-replacement as a 
compile option in './configure --help' ... perhaps it's deprecated?  Is it 
still a valid compile option for 3.0 STABLE13?

In any event, a gentleman named Gregori Parker responded and helped me with 
some suggestions, and I've managed to stabilize the squid at ~20480 MB cache_mem.

The only thing I seem to be missing now is the SO_FAIL issue.
Correct me if I'm wrong, but I assume 'SO' stands for 'Swap Out'... But how 
does this affect a system where there is nowhere for the squid to swap out to 
(cache_dir null /tmp)...?

Thanks for all your help so far.

Cheers,

David

--- On Mon, 4/6/09, Chris Robertson crobert...@gci.net wrote:

 From: Chris Robertson crobert...@gci.net
 Subject: Re: [squid-users] ...Memory-only Squid questions
 To: squid-users@squid-cache.org
 Received: Monday, April 6, 2009, 4:56 PM
 David Tosoff wrote:
  Hey all, haven't heard anything on this and could
 really use some help. :)
 
  You can disregard the HIT related questions, as once I
 placed this into a full scale test, it started hitting from
 memory wonderfully (~40% offload from the origin)
   

 Good news...

  The config works great, to a point. It fills up my
 memory up, but keeps going way past the
 cache_mem that I set.

 http://wiki.squid-cache.org/SquidFaq/SquidMemory

   I've dropped it down to 24GB, but it chews up all
 the memory on the system (32GB) and then continues into the
 swap and chews that up too. At that point, squid hangs,
 crashes then reloads and the cache has to spend another few
 hours building everything up into memory again. Like I said
 though, it works great...until the mem is full...
  I'm now going to test with a 4GB cache_mem and see
 what she does.
 
  Can anyone offer any suggestions for the best, most
 stable way of running a memory-only cache? is 'cache_dir
 null /tmp' actually what I want to be using here?

 Yes.

   The SO_FAIL's concern me, but I'm not sure if
 they should?
   

 Perhaps
 http://www.mail-archive.com/squid-users@squid-cache.org/msg19824.html
 gives some insight.  Are you using a
 (cache|memory)_replacement_policy that you didn't
 compile support for?

  Thanks!
 
  David

 Chris






RE: [squid-users] Squid log management questions

2009-03-10 Thread Gregori Parker
Adjust paths as necessary...

access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log

/etc/cron.hourly/rotatelogs.sh
#!/bin/bash
# this script rotates squid logs hourly and renames them with a timestamp
/usr/local/squid/sbin/squid -k rotate
sleep 10
mv /var/log/squid/access.log.0 /var/log/squid/access.${NOW:=$(date +%m-%d-%Y_%H)}.log
mv /var/log/squid/cache.log.0 /var/log/squid/cache.${NOW:=$(date +%m-%d-%Y_%H)}.log
chmod +r /var/log/squid/*

/etc/cron.daily/archivelogs.sh
#!/bin/bash
# this script archives and compresses logs daily to /var/log/squid/store
cd /var/log/squid/
tar -C /var/log/squid -zcf /var/log/squid/store/$(hostname)_$(date -d yesterday +%m-%d-%Y).tgz *.$(date -d yesterday +%m-%d-%Y)_*.log
chmod +r /var/log/squid/store/*
rm -f /var/log/squid/*.$(date -d yesterday +%m-%d-%Y)_*.log



-Original Message-
From: a bv [mailto:vbavbal...@gmail.com] 
Sent: Tuesday, March 10, 2009 1:31 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid log management questions

Hi list,

I need a bash script (which I'll run from cron) for archiving/storing the
squid log files.  In this environment there are multiple servers running
squid, and the log files are at a different path than the defaults.  For now
I'll need this script to compress and rename the access.log files to
logdate.tar.gz and copy them with ftp to another server.  I need to archive
all the files.  Because there is not much disk space on the servers, I'll
have an external USB disk (hope to have it soon) and go on archiving all the
files there (the USB disk will be physically connected to one server).
And I also have some questions:

How do you store and manage your squid log files?  Which squid log files do
you keep?  How long do you keep these files?  What is the source of these
choices - you, your company, or compliance issues?

Regards


RE: [squid-users] SQUID_MAXFD

2009-03-03 Thread Gregori Parker
Maximum number of file descriptors.
 
The default depends on your platform; the recommended number depends on usage and 
objects.  I remember 1024 being a common default years back, and that probably 
remains a suitable number for most people.  I run 8192 just to be safe.
 
Read the following, but only worry about it if you feel you're in real danger 
of running out...
 
http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-eb3240fe8e61368056af86138a2b5dcbc9781a54
 
http://www.cyberciti.biz/faq/squid-proxy-server-running-out-filedescriptors/
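
A quick sketch for checking what a running Squid actually ended up with, and for
bumping the shell limit before a test run (assumes default cachemgr access from
localhost; add -p if Squid isn't listening on 3128):

squidclient mgr:info | grep -i 'file descriptor'
ulimit -HSn 8192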
 



From: ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ [mailto:mirz...@gmail.com]
Sent: Tue 3/3/2009 10:59 PM
To: Squid Users
Subject: [squid-users] SQUID_MAXFD



SQUID_MAXFD -- what does this mean?  And what are the default and
recommended numbers?

--
-=-=-=-=
Personal Blog http://my.blog.or.id/ ( still learning )
Hot News !!! : Want a PREMIUM SMS service? Contact me ASAP.
Get MAXIMUM revenue share with no traffic requirements...




RE: [squid-users] Streaming is killing Squid cache

2009-03-01 Thread Gregori Parker
Better yet, implement a WSUS server, let it bypass caching, and GPO your users 
to update from that.  That way you can get away from having MS updates dictate 
caching options that result in problems with streaming.
 



From: Brett Glass [mailto:squid-us...@brettglass.com]
Sent: Sun 3/1/2009 8:02 AM
To: Amos Jeffries; Brett Glass
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Streaming is killing Squid cache



At 09:47 PM 2/28/2009, Amos Jeffries wrote:


Leaving min at -1, and max at something large (10-50MB?)

Should abort the streams when they reach the max value, You'll have to set the 
max to something reasonably higher than the WU cab size.
Service Packs may cause issues since they are 100MB each, but are infrequent 
enough to use a spider and cause caching if need be.

We've actually seen Microsoft updates as big as 800 MB.

Of course, this is a good argument for turning this setting into something 
that's controlled by an ACL, so one could say, "Cache everything from 
Microsoft, but not from these streaming providers."

--Brett





RE: [squid-users] Streaming is killing Squid cache

2009-03-01 Thread Gregori Parker
I missed the part where he mentioned that this is a poor ISP with no control 
over their clients, so you'll have to pardon my fatal presumptuousness.  Hint: 
I'm rolling my eyes
 
It may seem marvelous, but there actually are a handful of places that run 
Windows...even on servers.  In that sort of environment, you're likely to find 
AD, in which case WSUS + GPO are both simple, sensible and _zero_ cost 
solutions for this problem.

 




From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Sun 3/1/2009 5:15 PM
To: Gregori Parker
Cc: Brett Glass; Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] Streaming is killing Squid cache



 Better yet, implement a wsus server, let it bypass caching and gpo your
 users to update from that.  That way you can get away from having ms
 updates dictate caching options that result in problems with streaming.


You are of course making a few very fatal assumptions:

 1) that every service provider with this issue can afford to run a
dedicated Windows server machine for this purpose.

 2) that they want to.
 (I for one marvel that people are still willing to run MS windows on ANY
server.)

 3) that they have Enterprise level of control over where their clients
machines get WU from. Hint: Tier 0-3 ISP have _zero_ control over client
machine settings.


Amos

 

 From: Brett Glass [mailto:squid-us...@brettglass.com]
 Sent: Sun 3/1/2009 8:02 AM
 To: Amos Jeffries; Brett Glass
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Streaming is killing Squid cache



 At 09:47 PM 2/28/2009, Amos Jeffries wrote:


Leaving min at -1, and max at something large (10-50MB?)

Should abort the streams when they reach the max value, You'll have to
 set the max to something reasonably higher than the WU cab size.
Service Packs may cause issues since they are 100MB each, but are
 infrequent enough to use a spider and cause caching if need be.

 We've actually seen Microsoft updates as big as 800 MB.

 Of course, this is a good argument for turning this setting into something
 that's controlled by an ACL, so one could say, Cache everything from
 Microsoft, but not from these streaming providers.

Hmm, thinking about this some more...

Maybe your fix is to 'cache deny X' where X is an ACL defining the
streaming sources.  The abort logic apparently only holds links
open if they are considered cacheable (due to headers and non-denial in
Squid).
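
A minimal sketch of that, with purely illustrative domain names:

acl streaming_sites dstdomain .streams.example.com .video.example.net
cache deny streaming_sites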

Or perhaps you are hitting the one rare case where half_closed_clients
on is needed for now to make the abort kick in.

Amos






RE: [squid-users] Noob question about file types

2009-01-27 Thread Gregori Parker
Requests from clients can and will at times include headers indicating that 
they will accept compressed responses...this directive will strip that 
indication and ensure an uncompressed response.
 
Try it and report back.



From: zlotvor [mailto:ztar...@gmail.com]
Sent: Mon 1/26/2009 10:32 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Noob question about file types





Gregori Parker wrote:

 Simplest solution (not that you aren't keen!) would be to add the
 following to your squid.conf

   request_header_access Accept-Encoding deny all

 This will remove those pesky headers that some browsers send.

The manual says: "This option only applies to request headers, i.e., from the
client to the server."
How would it solve my problem if it manipulates requests from the client?

Zoltan
--
View this message in context: 
http://www.nabble.com/Noob-question-about-file-types-tp21674247p21680357.html
Sent from the Squid - Users mailing list archive at Nabble.com.





RE: [squid-users] Possible to Continue Serving Expired Objects When Source Becomes Unavailable?

2009-01-27 Thread Gregori Parker
Are there any plans to add the stale-if* options to the 3.0 train?  I'm
very interested in this option and would like to see its usage/effects
better documented.

-Original Message-
From: Chris Woodfield [mailto:rek...@semihuman.com] 
Sent: Tuesday, January 27, 2009 7:29 AM
To: Tim McNerney
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Possible to Continue Serving Expired Objects
When Source Becomes Unavailable?

2.7 supports the stale-if-error cache-control directive, which will  
accomplish this goal. The only caveat (AFAIK) is that it will only  
continue to serve objects if the origin returns a 500 server error or if  
the origin is unreachable; if the origin returns a 404 it will flush  
the object and pass the 404 through.

http://tools.ietf.org/html/draft-nottingham-http-stale-if-error-01
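
For example, an origin response carrying something like (values illustrative):

Cache-Control: max-age=300, stale-if-error=86400

would let 2.7 keep serving the stale copy for up to a day past expiry while the
origin is erroring or unreachable.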

-C

On Jan 26, 2009, at 7:52 PM, Tim McNerney wrote:

 Is it possible to configure Squid so that if an object becomes stale  
 and the server tries to fetch a current copy of the object, but the  
 object/server is unavailable, it will continue serving the cached  
 version?

 If so, what other control can be used in this case? Say you want to  
 allow it to run an hour over expiration before purging it. Or you  
 wanted to set how often to retry the source server.

 This question is specifically for 2.6, but it would be great to know  
 if things have changed with newer versions.

 Thanks.

 --Tim




RE: [squid-users] refresh_pattern to (nearly) never delete cached files in a http accelerator scenario?

2009-01-26 Thread Gregori Parker
Your cache_dir size is set to 20000 MB (20GB), which means you can only cache
20GB worth of items...and assuming you're using the default cache_swap_low value
(90), Squid will start removing old/stale items from the cache once you hit
90% of that 20GB (18GB, more or less).  Recommendation for caching more
items?  Use a larger cache_dir setting (assuming you have the space to
use), and set your cache_swap_low/high values to something higher, like
96/98.
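
Something along these lines, where the 500000 MB figure is purely illustrative
(it assumes the disk space is actually there):

cache_dir ufs c:/squid/var/cache 500000 16 256
cache_swap_low 96
cache_swap_high 98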

Also, make sure that this content is cacheable for long periods of time
(assuming static content)...just because an object is in the cache
doesn't mean it will be served from cache!  Check return headers from
the origin for expire time/max-age/cache-control (you can use
http://www.ircache.net/cgi-bin/cacheability.py), and finally take a look
at your refresh patterns, which will apply to content without
cache-controlling headers.

However, I don't think you will ever get all 10TB of content cached by
Squid...unless: 1) your squid server has the 10TB necessary to cache
everything, 2) your content can be cached essentially forever, and 3)
everything has been requested at least once in order to get Squid's
cache fully populated.  Instead, you probably want to approach caching
as a means to save bandwidth costs on frequently requested content,
which means what you're doing right now is fine - your cache is fully
populated and Squid is continuing to do its job.  Just make sure you're
caching optimally: profile the hit rate, aim for around 80% or more,
depending on request patterns.

HTH


-Original Message-
From: Jamie Plenderleith [mailto:ja...@plenderj.com] 
Sent: Monday, January 26, 2009 10:53 AM
To: squid-users@squid-cache.org
Subject: [squid-users] refresh_pattern to (nearly) never delete cached
files in a http accelerator scenario?

Hi All,

I am using Squid as an HTTP accelerator/reverse proxy.  It is being used to
cache the contents of a site that is being served up from a 1Mbps internet
connection, but the proxy itself is hosted at Rackspace in the US.
Users visit the squid server, and if the item isn't there then it's
retrieved from our offices over the 1Mbps upstream.
I started running wget on another machine on the web to cache the contents
of the site, and the cache on the proxy was growing and growing - but only
to a certain point, and then it seemed to stop at about 170,000 files.

Below is the configuration that we've been using:

http_port 80 accel defaultsite=[our office's static IP]
cache_peer [our office's static IP] parent 80 0 no-query originserver
name=myAccel
cache_dir ufs c:/squid/var/cache 2 16 256
acl our_sites dstdomain [our office's static IP]
acl all src 0.0.0.0/0.0.0.0
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
visible_hostname [hostname of proxy server]
cache_mem 1 GB
maximum_object_size 2 KB
maximum_object_size_in_memory 1000 KB

We tried some variations of the refresh_pattern configuration option, but
our cache doesn't seem to grow beyond its current size.
There is about 10TB worth of data to cache, and the cache isn't going past
17.3GB in size.  I was logging the growth of the cache folder, and you can see
around 21/01/09 - 22/01/09 that while it was getting bigger it then started
getting smaller.

time    date        size (GB)    files
15:21   19/01/09    1.65/1.99    (126,276 files)
23:22   19/01/09    2.99/3.35    (134,820 files)
01:23   20/01/09    3.73/4.10    (139,767 files)
02:33   20/01/09    4.17/4.54    (142,415 files)
11:17   20/01/09    7.42/7.82    (162,009 files)
12:37   20/01/09    7.92/8.33    (164,794 files)
13:08   20/01/09    8.10/8.52    (165,993 files)
19:42   20/01/09    9.39/9.82    (175,192 files)
23:17   20/01/09    10.0/10.5    (179,588 files)
01:38   21/01/09    10.5/10.9    (182,303 files)
02:24   21/01/09    10.6/11.1    (183,209 files)
12:14   21/01/09    12.5/13.0    (193,659 files)
17:54   21/01/09    13.8/14.2    (200,816 files)
03:14   22/01/09    15.6/16.1    (212,081 files)
16:54   22/01/09    17.2/17.5    (155,725 files)
22:48   22/01/09    17.3/17.6    (107,216 files)
17:07   23/01/09    17.4/17.6    (107,246 files)
14:49   25/01/09    17.3/17.6    (107,287 files)
18:48   26/01/09    17.3/17.5    (103,780 files)

Any recommendations on how to ensure the proxy doesn't remove anything
from
cache?

Regards,
Jamie



RE: [squid-users] Noob question about file types

2009-01-26 Thread Gregori Parker
Simplest solution (not that you aren't keen!) would be to add the
following to your squid.conf

request_header_access Accept-Encoding deny all

This will remove those pesky headers that some browsers send.


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, January 26, 2009 4:19 PM
To: zlotvor
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Noob question about file types

zlotvor wrote:
 Hi,
 
 Is it possible to change the extension of the file that the server sends to the
 browser?
 The problem is that I have a web server (IIS) with SVG graphics in pages, but
 the server sends files with a .svg.gz extension instead of .svg or .svgz.
 Only IE can show that file directly; everything else (Firefox, Konqueror,
 Opera, etc.) just asks to save it.  I cannot change the behavior of the
 server, so the solution would be putting something between the server and
 browser to change the extension of the served file.
 
 Thanks in advance, Zoltan

If the browser indicates it accepts compressed versions, then fails to 
decompress, you need something to decompress in transit. Simply changing 
the filename will result in error pages.

If you are keen you might want to try Squid-3.1 and the brand new 
compression eCAP library.
   http://wiki.squid-cache.org/Features/eCAP

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE12
   Current Beta Squid 3.1.0.4


RE: [squid-users] squid caching report

2009-01-16 Thread Gregori Parker
If you have snmp enabled, I would highly recommend setting up an instance of 
cacti (http://www.cacti.net/)

If you have trouble understanding the Cacti installation, I would recommend 
getting started with CactiEZ...it's an ISO that gets you up and going fast.
CactiEZ Download: http://mirror.cactiusers.org/downloads/CactiEZ-v0.4.tar.gz 
CactiEZ Documentation: http://cactiusers.org/wiki/CactiEZ 

Cacti templates for Squid: http://forums.cacti.net/about4142.html

I have Cacti doing all my reporting on Squid, and it's beautiful (and the execs 
love it, which doesn’t hurt...I can send you graph examples if you like)

- Gregori


-Original Message-
From: bijayant kumar [mailto:bijayan...@yahoo.com] 
Sent: Thursday, January 15, 2009 10:48 PM
To: squid users
Subject: [squid-users] squid caching report

Hello list,

I want to have reports about Squid performance, like how much caching is being 
done by Squid and how much bandwidth is being saved by the squid cache 
returning objects from cache.  I thought of the cache manager output, but my 
seniors want to see reports in a less complex format, something like graph-based 
reports.  I have also configured MRTG graphs for Squid, but most of that output 
I am not able to understand.
Is anything available on the Internet so that I can create some graphs/reports 
about Squid performance?  Any pointer will be highly useful for me.

Bijayant Kumar




RE: [squid-users] SNMP OIDs

2009-01-16 Thread Gregori Parker
Google is your friend, search for Squid+OID...the following were in the
top 10 results:
http://www.linofee.org/~jel/proxy/Squid/oid.shtml
http://www.oidview.com/mibs/3495/SQUID-MIB.html

Keep in mind that a lot of these OIDs will end in .1, .5 and .60 (for 1
min, 5 min and hourly averages)

e.g.
requestHitRatioOneMin .1.3.6.1.4.1.3495.1.3.2.2.1.9.1
requestHitRatioFiveMin .1.3.6.1.4.1.3495.1.3.2.2.1.9.5
requestHitRatioHourly .1.3.6.1.4.1.3495.1.3.2.2.1.9.60

You can also just snmpwalk the 1.3.6.1.4.1.3495.1 tree
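
For example (host and community are placeholders; Squid answers SNMP on UDP
port 3401 by default):

snmpget -v1 -c public squidhost.example.com:3401 .1.3.6.1.4.1.3495.1.3.2.2.1.9.1
snmpwalk -v1 -c public squidhost.example.com:3401 1.3.6.1.4.1.3495.1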



-Original Message-
From: Luis Daniel Lucio Quiroz [mailto:luis.daniel.lu...@gmail.com] 
Sent: Friday, January 16, 2009 9:54 AM
To: squid-users@squid-cache.org
Subject: [squid-users] SNMP OIDs

Hi,

We are trying to get rid of a commercial reverse proxy; however, we must get
this data from SNMP.  I know that squid has SNMP support and I've used it, but
I don't know all the OIDs.  Does anyone have a link where the OIDs are
specified?

Regards,




RE: [squid-users] Squid consumes a lot dsk space

2009-01-05 Thread Gregori Parker
I'm surprised 30 users haven't consumed more than 65GB worth of internet
in that amount of time :)

Keep in mind that Squid will keep stale items in cache (not serving them
of course) until it hits its threshold (default 90-something percent
cache usage), because Squid doesn't want to waste time purging stale
objects until necessary.  See cache_swap_high/low parameters for more
information on these thresholds.  Also see
http://wiki.squid-cache.org/SquidFaq/InnerWorkings#head-3ccaef79f36bf2d74c7cdde76eeb163b8c8e691e
to learn about Squid's cache replacement algorithm.

If you still want to fine-tune, I would recommend putting some profiling
in place (see cachemgr or snmp) so you have a 'before' to compare
against when making changes.
http://wiki.squid-cache.org/SquidFaq/SquidProfiling 


-Original Message-
From: Wilson Hernandez [mailto:w...@msdrd.com] 
Sent: Monday, January 05, 2009 12:52 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid consumes a lot dsk space

Hello again.

I would like to fine-tune squid so that it won't cache so many things. 
I noticed that in less than a month a network with about 30 users 
consumed 65GB of hard drive.  I don't think that's normal; if it is, please 
correct me.


RE: [squid-users] Problem configure squid 3.1

2009-01-05 Thread Gregori Parker
Sounds like you need a C++ compiler; do an 'apt-get gcc' (you're running
Debian IIRC)

-Original Message-
From: Wilson Hernandez [mailto:w...@msdrd.com] 
Sent: Monday, January 05, 2009 1:50 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Problem configure squid 3.1

Hello.
Me again.

It seems that everything I try to do can't go smoothly.  Now I'm trying 
to get squid-3.1.0.3 installed on my system, upgrading from an 
older version, but I've come across a problem when I run ./configure.
I get the following error (I searched the internet but can't find a 
solution):

checking for C++ compiler default output file name...
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
configure: error: ./configure failed for lib/libTrie

I removed the previous squid version which was installed as a package.

Please help.

Thanks.



RE: [squid-users] Problem configure squid 3.1

2009-01-05 Thread Gregori Parker
Try 'apt-get libc-dev' and report back

-Original Message-
From: Wilson Hernandez - MSD, S. A. [mailto:w...@msdrd.com] 
Sent: Monday, January 05, 2009 6:01 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem configure squid 3.1

I've already have it installed and still not working.

Gregori Parker wrote:
 Sounds like you need a c++ compiler, do a 'apt-get gcc' (you're
running
 debian IIRC)
 
 -Original Message-
 From: Wilson Hernandez [mailto:w...@msdrd.com] 
 Sent: Monday, January 05, 2009 1:50 PM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Problem configure squid 3.1
 
 Hello.
 Me again.
 
 It seems that everyhting I try to do can't go smoothly. Now, I'm
trying 
 to get squid-3.1.0.3 installed in my system trying to upgrade from an 
 older version but now come accross a problem when I run ./configure
 I get the following error (I searched the internet but, can't get a 
 solutions) :
 
 checking for C++ compiler default output file name...
 configure: error: C++ compiler cannot create executables
 See `config.log' for more details.
 configure: error: ./configure failed for lib/libTrie
 
 I removed the previous squid version which was installed as a package.
 
 Please help.
 
 Thanks.
 
 
 

-- 
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


RE: [squid-users] Problem configure squid 3.1

2009-01-05 Thread Gregori Parker
I'm sorry, I meant apt-get install libc-dev (I'm obviously not a Debian
user)

I've also read that you may need the 'build-essential' package as well,
so you might want to try that
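
In one line (build-essential already pulls in the compiler, make and the libc
development headers, so it normally covers both suggestions):

apt-get install build-essential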


-Original Message-
From: Gregori Parker [mailto:gregori.par...@theplatform.com] 
Sent: Monday, January 05, 2009 4:33 PM
To: w...@msdrd.com
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Problem configure squid 3.1

Try 'apt-get libc-dev' and report back

-Original Message-
From: Wilson Hernandez - MSD, S. A. [mailto:w...@msdrd.com] 
Sent: Monday, January 05, 2009 6:01 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem configure squid 3.1

I've already have it installed and still not working.

Gregori Parker wrote:
 Sounds like you need a c++ compiler, do a 'apt-get gcc' (you're
running
 debian IIRC)
 
 -Original Message-
 From: Wilson Hernandez [mailto:w...@msdrd.com] 
 Sent: Monday, January 05, 2009 1:50 PM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Problem configure squid 3.1
 
 Hello.
 Me again.
 
 It seems that everyhting I try to do can't go smoothly. Now, I'm
trying 
 to get squid-3.1.0.3 installed in my system trying to upgrade from an 
 older version but now come accross a problem when I run ./configure
 I get the following error (I searched the internet but, can't get a 
 solutions) :
 
 checking for C++ compiler default output file name...
 configure: error: C++ compiler cannot create executables
 See `config.log' for more details.
 configure: error: ./configure failed for lib/libTrie
 
 I removed the previous squid version which was installed as a package.
 
 Please help.
 
 Thanks.
 
 
 

-- 
*Wilson Hernandez*
Presidente
829.848.9595
809.766.0441
www.msdrd.com http://www.msdrd.com
Conservando el medio ambiente


RE: [squid-users] GET and POST Method Characters

2008-12-18 Thread Gregori Parker
A 12K URL is most likely an attempt to exploit, and should be denied.
Moving to 2.7/3.x will enable 8192 byte URLs, so this 12K URL you speak
of will still be denied...if you think it's legit and really want Squid
to proxy it, then you can redefine MAX_URL in inc/defines.h before
compiling, but keep in mind that this isn't tested or recommended.
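
If you really do go that route, the change amounts to something like the
following before running ./configure (path as given above; the value just needs
to exceed your longest URL - and again, this is unsupported and untested):

sed -i 's/#define MAX_URL.*/#define MAX_URL 16384/' inc/defines.h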


-Original Message-
From: Mario Remy Almeida [mailto:malme...@isaaviation.ae] 
Sent: Thursday, December 18, 2008 3:21 AM
To: Amos Jeffries
Cc: Squid Users
Subject: Re: [squid-users] GET and POST Method Characters

OK thanks Amos,

the size of the requested URL is 12k and my squid version is 2.6STABLE20

I'll be moving to squid 2.7STABLE5, just waiting for the new hardware.

Any other suggestions?

//Remy

On Fri, 2008-12-19 at 00:03 +1300, Amos Jeffries wrote:
 Mario Remy Almeida wrote:
  Hi All,
  
  Can someone tell me what is the max number of characters allowed in
GET
  and POST method.
  
  When I access the below URL (mentioned in the access.log file) I get an
  Invalid URL error message in the browser
  
  message in access.log file
 
 snip huge URL
 
 Depends on your squid version. Older Squid have increased the limit
from 
 2KB to 4KB, and the most recent releases have bumped it again to 8KB.
 
 Amos



RE: [squid-users] issue with htcp support on squid

2008-12-09 Thread Gregori Parker
I ran into this as well when I upgraded from 2.6 to 3.0 (stable10) and
tried converting from icp to htcp in the process.  Squid would start up,
go for a little while, and then die in less than 2 minutes.  I resolved
this by adding 'no-query' to my cache_peer sibling statements (in
effect, turning off intercache communication).  I would like to know how
best to re-enable sibling communication, and whether or not I'm in the
same boat here (waiting for a bug fix).

relevant portion of config...

icp_port 0
htcp_port 4827
icp_query_timeout 500
cache_peer x.x.x.1 sibling 80 4827 htcp proxy-only no-query
cache_peer x.x.x.2 sibling 80 4827 htcp proxy-only no-query
cache_peer x.x.x.3 sibling 80 4827 htcp proxy-only no-query
acl siblings src x.x.x.1 x.x.x.2 x.x.x.3
http_access allow siblings
htcp_access allow siblings
htcp_access deny all

cache.log from that time - notice the 'neighborsUdpPing: There is no ICP
socket!' towards the end...that's where Squid would die, and try to
revive itself, but never really get airborne.

squid[19208]: Starting Squid Cache version 3.0.STABLE10 for
x86_64-unknown-linux-gnu...
squid[19208]: Process ID 19208
squid[19208]: With 16384 file descriptors available
squid[19208]: Performing DNS Tests...
squid[19208]: Successful DNS name lookup tests...
squid[19208]: DNS Socket created at 0.0.0.0, port 37033, FD 8
squid[19208]: Adding nameserver x.x.x.x from /etc/resolv.conf
squid[19208]: Adding nameserver x.x.x.x from /etc/resolv.conf
squid[19208]: Adding domain xx.com from /etc/resolv.conf
squid[19208]: Unlinkd pipe opened on FD 13
squid[19208]: Store logging disabled
squid[19208]: Swap maxSize 0 KB, estimated 0 objects
squid[19208]: Target number of buckets: 0
squid[19208]: Using 8192 Store buckets
squid[19208]: Max Mem  size: 4194304 KB
squid[19208]: Max Swap size: 0 KB
squid[19208]: Using Round Robin store dir selection
squid[19208]: Current Directory is /root
squid[19208]: Loaded Icons.
squid[19208]: Accepting accelerated HTTP connections at 0.0.0.0, port
80, FD 11.
squid[19208]: Accepting HTCP messages on port 4827, FD 12.
squid[19208]: Accepting SNMP messages on port 3401, FD 14.
squid[19208]: Configuring Parent x.x.x.x /80/0
squid[19208]: Configuring Sibling x.x.x.x /80/4827
squid[19208]: Configuring Sibling x.x.x.x /80/4827
squid[19208]: Configuring Sibling x.x.x.x /80/4827
squid[19208]: Ready to serve requests.
squid[19208]: Finished rebuilding storage from disk.
squid[19208]: 0 Entries scanned
squid[19208]: 0 Invalid entries.
squid[19208]: 0 With invalid flags.
squid[19208]: 0 Objects loaded.
squid[19208]: 0 Objects expired.
squid[19208]: 0 Objects cancelled.
squid[19208]: 0 Duplicate URLs purged.
squid[19208]: 0 Swapfile clashes avoided.
squid[19208]:   Took 0.28 seconds (  0.00 objects/sec).
squid[19208]: Beginning Validation Procedure
squid[19208]:   Completed Validation Procedure
squid[19208]:   Validated 25 Entries
squid[19208]:   store_swap_size = 0
squid[19208]: storeDirWriteCleanLogs: Starting...
squid[19208]:   Finished.  Wrote 0 entries.
squid[19208]:   Took 0.00 seconds (  0.00 entries/sec).
squid[19208]: neighborsUdpPing: There is no ICP socket!
squid[9938]: Squid Parent: child process 19208 exited with status 1
squid[9938]: Squid Parent: child process 19211 started
squid[19211]: Starting Squid Cache version 3.0.STABLE10 for
x86_64-unknown-linux-gnu...
(repeats)


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 09, 2008 2:04 PM
To: Bostonian
Cc: Chris Robertson; squid-users@squid-cache.org
Subject: Re: [squid-users] issue with htcp support on squid

tis 2008-12-09 klockan 13:19 -0800 skrev Bostonian:
 Thank you for your help, Chris.
 
 I turned on all the debug_options you suggested and found out the
 problem. By mistake I disabled
 icp_port to use htcp. The log file indicated that no ICP socket. The
 problem is solved now.

Please file a bug. HTCP operation is not supposed to need the ICP
socket.

Regards
Henrik



RE: [squid-users] compiling squid-3.1.0.2.tar.bz2

2008-12-01 Thread Gregori Parker
Read the release notes for 3.x, you'll see that --enable-snmp is no
longer a valid configure option as it's built by default (use
--disable-snmp if you don't want it)

Once you configure, your next step is to 'make all', followed by 'make
install'

HTH


-Original Message-
From: Saurabh Agarwal [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 01, 2008 11:00 AM
To: squid-users@squid-cache.org
Subject: [squid-users] compiling squid-3.1.0.2.tar.bz2

Hi All

Today I downloaded squid-3.1.0.2.tar.bz2 from
http://www.squid-cache.org/Download/ and then followed the following
instructions.

1. tar -jxvf squid-3.1.0.2.tar.bz2
2. cd squid-3.1.0.2
3. ./configure --enable-storeio=aufs,coss,diskd,null,ufs --enable-snmp
4. make gives an error. It doesn't work. I get the following error

make: *** No targets specified and no makefile found.  Stop.

Can any one tell what's wrong here with the Makefile?

Regards,
Saurabh


RE: [squid-users] Recommended Store Size

2008-11-26 Thread Gregori Parker
Of course you can :)

The trick to making these adjustments is having a means to gauge their
benefit/detriment...personally, each of my squid servers has all its
metrics graphed in Cacti and generates a Calamaris report each night, so
I get good hard data that can be compared to a historical baseline.  Put
this kind of monitoring in place (especially Cacti, IMO), and you won't be
tied to rules of thumb.


-Original Message-
From: Stand H [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 26, 2008 12:40 AM
To: Squid Users
Subject: Re: [squid-users] Recommended Store Size

Hi Chris,

 The rule of thumb I've read previously is storage
 equivalent to a week's traffic.  If you pass an
 average of 30GB per day, a storage size of 210GB is a good
 start.

I have two squid servers. Each processes around 120GB a day with about
43% request hit ratio and 25% byte hit ratio. The cache size is 300GB
with 6GB memory. Per rule of thumb, can I increase my cache size?

Thank you.

Stand


  


RE: [squid-users] URGENT : How to limit some ext

2008-11-26 Thread Gregori Parker
And if every post is going to be 'life and death', urgent, asap, etc...you 
really need to get a test lab / virtual environment :)

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 26, 2008 12:23 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] URGENT : How to limit some ext

░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
 how to limit .zip .swf only from squid.conf in 2 option

 1. Global Rule ( i mean all user will get this rule - limit on zip and swf )
 2. Individual Rule ( only certain ppl that listed )

 thx b4

 in urgent ASAP :(

 it's about dead and live :(
   

There's a whole FAQ section on ACLs...

http://wiki.squid-cache.org/SquidFaq/SquidAcl

Chris


RE: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Gregori Parker
I'm curious about this as well - so is the answer that siblings cannot
be queried for dynamic content and you need to use hierarchy_stoplist to
keep squid from trying?  Or is there a way to get ICP/HTCP to query
siblings with the entire URI, query arguments and all?  I have a very
similar setup and have been considering eliminating sibling
relationships altogether because of this...


-Original Message-
From: Steve Webb [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 19, 2008 12:54 PM
To: Chris Robertson
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] ICP queries for 'dynamic' urls?

That did it.  Thanks!

- Steve

On Wed, 19 Nov 2008, Chris Robertson wrote:

 Date: Wed, 19 Nov 2008 11:42:07 -0900
 From: Chris Robertson [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] ICP queries for 'dynamic' urls?
 
 Steve Webb wrote:
 Hello.
 
 I'm caching dynamic content (URLs with ? and & in them) and everything's
 working fine with one exception.
 
 I'm seeing ICP queries for only static content and not dynamic
content even 
 though squid is actually caching dynamic content.
 
 Q: Is there a setting somewhere to ask squid to also do ICP queries
for 
 dynamic content like there was with the no-cache directive to
originally 
 not cache dynamic content (aka cgi-bin and ? content)?

 http://www.squid-cache.org/Doc/config/hierarchy_stoplist/

 
 I'm using squid version 2.5 (I know, I should upgrade to 3.x, but I'm

 trying to stick with the same versions across the board and I don't
have 
 time to run my config through QA with 3.0 at this time.  Please don't
tell 
 me to upgrade.)
 
 My cache_peer lines look like:
 
 cache_peer 10.23.14.4   sibling 80  3130  proxy-only
 
 This is for a reverse proxy setup.
 
 Dataflow is:
 
 Customer -> Internet -> Akamai -> LB -> squid -> LB -> apache -> LB ->
 storage
 
 The apache layer does an image resize (which I want to cache) and the URL
 is http://xxx/resize.php?w=xx&h=xx&...
 
 The storage layer is just another group of apache servers that
serve-up 
 the raw files.
 
 LB is a load-balancer.
 
 - Steve
 

 Chris


-- 
Steve Webb - Lead System Administrator for Pronto.com
Email: [EMAIL PROTECTED]  (Please send any work requests to:
[EMAIL PROTECTED])
Cell: 303-564-4269, Office: 303-497-9367, YIM: scumola


RE: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Gregori Parker
I understand all that and am not using or questioning the default
config.  My config lacks definition for hierarchy_stoplist completely,
which means it's defined as internal default (which should be nada).

What I'm asking is: are my inter-cache/sibling/ICP/HTCP queries
including full URIs, or are they stripped at the '?' (i.e. s/?.*//)?


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 19, 2008 2:40 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ICP queries for 'dynamic' urls?

Gregori Parker wrote:
 I'm curious about this as well - so is the answer that siblings cannot
 be queried for dynamic content and you need to use hierarchy_stoplist
to
 keep squid from trying?  Or is there a way to get ICP/HTCP to query
 siblings with the entire URI, query arguments and all?  I have a very
 similar setup and have been considering eliminating sibling
 relationships altogether because of this...
   

Way back when the web was young and dynamic content was rare, query 
strings just about always indicated personalized, non-cacheable 
content.  Prior to versions 2.7 and 3.1 (I think), Squid by default did 
not even attempt to cache anything with cgi-bin or a question mark in 
the URL (acl QUERY urlpath_regex cgi-bin \? / no_cache deny QUERY).  Since 
this content was not cached, there was no reason to check whether it is 
cached on siblings (hierarchy_stoplist cgi-bin ?).

If you are using the now-recommended refresh_pattern
(refresh_pattern -i (/cgi-bin/|\?) 0 0% 0), dynamic content that provides 
freshness information can be cached (and that which doesn't, will not be), 
so removing the default hierarchy_stoplist might net you a few more hits.
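
For reference, the full recommended block looks something like this in recent
default configs (from memory, so double-check your own version's squid.conf
defaults; order matters - the dynamic-content line must come before the
catch-all):

refresh_pattern ^ftp:             1440    20%     10080
refresh_pattern ^gopher:          1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0       0%      0
refresh_pattern .                 0       20%     4320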

Hope that clears it up.

Chris


RE: [squid-users] very basic question on enforcing use of proxy

2008-11-15 Thread Gregori Parker
You could enforce a proxy .pac file via global policy, or, depending on your 
network equipment, you may be able to do policy-based routing (route by port) 
and/or even WCCP...there are several ways to get squid in between your users 
and their HTTP traffic that I would recommend exploring before doing 
transparent-mode anything.
 



From: Amos Jeffries [mailto:[EMAIL PROTECTED]
Sent: Sat 11/15/2008 3:32 AM
To: James Byrne
Cc: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: Re: [squid-users] very basic question on enforcing use of proxy



James Byrne wrote:
 you can use a firewall or you can put squid in transparent mode, and set
 up a transparent proxy.

Which requires a firewall, and additionally requires NAT for the
interception.

Yes, a firewall is the only way to prevent clients doing what they like
when connecting externally. Regardless of the connection type.


Amos



 On Nov 14, 2008, at 9:58 PM, qqq1one @yahoo.com wrote:

 Hi,

 I have a very basic question.  I don't even know what to search on for
 this question.  I have squid installed and running, but my browser can
 freely get out to the internet without going through the proxy.  I
 know about specifying the proxy in the browser, but what prevents an
 unconfigured browser from going straight out to the internet?  Is a
 firewall the only way to prevent this?

 Thanks in advance.







--
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.2




RE: [squid-users] Multiple site example

2008-11-14 Thread Gregori Parker
You only need one http_port statement with one defaultsite...define
multiple cache_peer parents, like so, and make sure your ACLs are
straight (this is the tricky aspect of reverse proxy IMO, getting the
security right)

http_port 80 accel defaultsite=bananas.mysite.com vhost
cache_peer 10.10.10.1 parent 80 0 no-query no-digest originserver
name=mysite1
cache_peer 10.10.10.2 parent 80 0 no-query no-digest originserver
name=mysite2
cache_peer 10.10.10.3 parent 80 0 no-query no-digest originserver
name=mysite3
cache_peer_domain mysite1 apples.mysite.com
cache_peer_domain mysite2 oranges.mysite.com
cache_peer_domain mysite3 bananas.mysite.com

acl my_site1 dstdomain apples.mysite.com
acl my_site2 dstdomain oranges.mysite.com
acl my_site3 dstdomain bananas.mysite.com
acl myaccelport port 80

cache allow my_site1
cache allow my_site2
cache allow my_site3

http_access allow my_site1 myaccelport
http_access allow my_site2 myaccelport
http_access allow my_site3 myaccelport


Personally, I use a load balancer to direct traffic to Squid, and have
the hostnames redefined in /etc/hosts to get traffic to the backend
servers
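
e.g. /etc/hosts on the Squid box, reusing the origin IPs from the cache_peer
lines above:

10.10.10.1   apples.mysite.com
10.10.10.2   oranges.mysite.com
10.10.10.3   bananas.mysite.com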

Hope that helps, YMMV

- Gregori

-Original Message-
From: Ramon Moreno [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 14, 2008 1:24 PM
To: Henrik Nordstrom
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Multiple site example

Henrik,

Thanks for the quick reply.

So I think this answers the cache peer question.

The other is what do I specify for the http_port section.

Currently I am only doing acceleration for one site:
http_port 80 accel defaultsite=bananas.mysite.com

How do I configure this parameter for 3 sites while using the same
port? I am guessing, but would it be something like this:
http_port 80 accel defaultsite=bananas.mysite.com vhost
http_port 80 accel defaultsite=apples.mysite.com vhost
http_port 80 accel defaultsite=oranges.mysite.com vhost




On Fri, Nov 14, 2008 at 1:12 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On fre, 2008-11-14 at 12:19 -0800, Ramon Moreno wrote:

 I know how to accelerate for one site based on the faq, however not
 too sure how to do multiple.

 It's also in the FAQ..

 Squid FAQ Reverse Proxy - Sending different requests to different
backend web servers

http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7bd155a1a9919bda8
ff10ca7d3831458866b72eb

 Regards
 Henrik



RE: [squid-users] parseHTTPRequest problem with SQUID3

2008-11-12 Thread Gregori Parker
So, do I need to file a bug report, so that this can get addressed?  Or
are the devs already aware?

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 11, 2008 5:56 PM
To: Gregori Parker
Cc: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] parseHTTPRequest problem with SQUID3


Increases in compatibility are in the release notes and ChangeLog.
The regression in 0.9 support you hit is a bug.


 Is there any possibility of restoring 0.9 support in Squid3?  I can
 always have my load-balancer format the requests to contain the
 HTTP/1.0\n, but that seems like a real hidden gotcha for anyone
 migrating from 2.6 to 3.0 - which is fine, as long as it's called out
in
 the release notes.

Yes, it is a bug in both squid and the balancer. Squid is supposed to be
able to handle obsolete 0.9 anyway. We have to track it down and fix it.
But that's not to say that the load balancer itself isn't 'broken' for
sending 0.9 traffic.

Amos



RE: [squid-users] parseHTTPRequest problem with SQUID3

2008-11-11 Thread Gregori Parker

 Not fully 1.1, but from (0.9 + 1.0) to fully 1.0 + partial 1.1. Which
is
 weird because 2.6 went almost fully 1.0 as well quite a while back.

I wish changes like this were called out in the release notes

 always_direct prevents the requests going through peers. Nothing more.
 if the domain itself resolves to allow direct requests its okay, but
 accelerators should be setup so the domain resolves to Squid which can
 cause issues.

That was the intention...I don't want Squid checking siblings for
healthchecks, so I'll keep the always_direct line in addition to the
cache deny.

 Yes, to prevent storing them use 'cache deny HealthChecks'.
 To prevent logging use 'access_log ... !HealthChecks'

Done.  I already had the logging configured as such, just omitted it
from my message because it was extraneous to the discussion.

 Okay. That confirms my idea that the HealthChecks request is missing
 the 'HTTP/1.0' part of the request string. The first line of every
valid
 accelerated request should look something like this:
  GET /mgmt/alive HTTP/1.0\n

Is there any possibility of restoring 0.9 support in Squid3?  I can
always have my load-balancer format the requests to contain the
HTTP/1.0\n, but that seems like a real hidden gotcha for anyone
migrating from 2.6 to 3.0 - which is fine, as long as it's called out in
the release notes.

Thanks


RE: [squid-users] url length limit

2008-11-11 Thread Gregori Parker
Just a follow-up,

Squid3 didn't work as expected for me, so I tried recompiling
2.6-STABLE22 with the MAX_URL changed to 8192.

It's working great so far and have moved it into production with no
issues.
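
For anyone wanting to repeat this, the change boils down to the following
(a sketch only; check the exact #define line in src/defines.h rather than
trusting the sed blindly):

# raise the compile-time URL cap in the 2.6 source tree, then rebuild
sed -i 's/^#define MAX_URL .*/#define MAX_URL  8192/' src/defines.h
./configure ...   # same options as the existing build
make && make install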

- Gregori


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 09, 2008 2:34 AM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] url length limit

 It has been tried at 8192 with no sign of trouble in Squid-3.
 If your URL get much large than that, we really do need it checked up as
 high as 128KB so feel free to build with larger values, just please
 report back how it goes (particularly if good news).
 http://www.squid-cache.org/bugs/show_bug.cgi?id=2267
 
 I'd suggest experimenting with squid 3.1.0.1 to see if its usable in 
 your setup. The URL limits have been raised to 8KB already and
diskless 
 operation is much more polished and native.
 
 
 As for logging the URI, most things in squid are dropped when they are

 found to overflow the buffers like that.  The details may be logged to

 cache.log when debug_options is set to the right section and level.
I'm 
 not sure right now which one is relevant to 2.6 though, there are a
few 
 available.
 http://wiki.squid-cache.org/KnowledgeBase/DebugSections
 
 
 Amos



[squid-users] parseHTTPRequest problem with SQUID3

2008-11-10 Thread Gregori Parker
I've just rolled back a failed Squid migration from 2.6 to 3.0, and I'm
looking for reasons why it failed.  I have been successfully using the
latest Squid 2.6 to http-accel a pool of backend web servers, with a
load-balancer in front to direct traffic.

The load-balancer hits the squid server with a health check, i.e. GET
/mgmt/alive and expects an HTTP 200, before allowing it to have traffic.
When I turned up Squid3, all health checks failed...showing the
following in access.log:

1226355682.853  0 ip_of_load-balancer NONE/400 1931 GET
http://cached.whatever.com/ps/management/alive - NONE/- text/html
1226355684.875  0 ip_of_load-balancer NONE/400 1931 GET
http://cached.whatever.com/ps/management/alive - NONE/- text/html
1226355687.905  0 ip_of_load-balancer NONE/400 1931 GET
http://cached.whatever.com/ps/management/alive - NONE/- text/html

After some troubleshooting and turning debug_options up, it appears that
perhaps it's the request done without a hostname that's the problem,
because I see 'parseHttpRequest: Missing HTTP identifier' in cache.log
with debug_options set to ALL,3.

Squid 2.6 handled these fine, and my configuration hasn't changed, so was
there something introduced in Squid3 that demands a hostname?  I know
from packet captures that my load-balancer literally connects to the
squid server on port 80 and does a GET /mgmt/alive (not GET
http://cached.whatever.com/mgmt/alive)

Here are the relevant portions of my config:

http_port 80 accel defaultsite=cached.whatever.com vhost 
cache_dir null /tmp

cache_peer 1.1.1.1 parent 80 0 no-query no-digest originserver
name=Cached-Whatever
cache_peer_domain Cached-Whatever cached.whatever.com

acl our_site dstdomain cached.whatever.com
acl Origin-Whatever dst 1.1.1.1
acl acceleratedPort port 80
acl HealthChecks urlpath_regex mgmt/alive

always_direct allow HealthChecks
cache deny HealthChecks
cache allow Origin-Whatever
http_access allow Origin-Whatever acceleratedPort
http_access deny all
http_reply_access allow all

access_log /var/log/squid/access.log squid !HealthChecks
visible_hostname cached.whatever.com
unique_hostname squid03


Thanks - Gregori



RE: [squid-users] parseHTTPRequest problem with SQUID3

2008-11-10 Thread Gregori Parker
Thanks for your response

 That message means there was no HTTP/1.0 tag on the request line.
 Squid begins assuming HTTP/0.9 traffic.


 Squid 2.6 handled these fine, and my configuration hasnt changed, so
was
 there something introduced in Squid3 that demands a hostname?

 no.

Something has to have changed, because I ported my config over as-is
(aside from undefining the 'all' acl element, as specified in the
release notes)

For a minute I thought Squid had gone HTTP/1.1 and I needed my health
checks to supply a Host header, but my capture shows the response as:

P...HTTP/1.0.400.Bad.Request..Server:.squid/3.0.STABLE10..Mime-Versi
on:.1.0..Date:.Mon,.10.Nov.2008.22:49:53 (+content)


 acl our_site dstdomain cached.whatever.com
 acl Origin-Whatever dst 1.1.1.1
 acl acceleratedPort port 80
 acl HealthChecks urlpath_regex mgmt/alive
 always_direct allow HealthChecks

 This forces HealthChecks to take an abnormal path. Try just letting
them
 go the same way as regular accelerated request. It will be more
accurate
 to match the health of client requests.

I thought always_direct kept requests from being checked against the
cache/siblings?  I don't want them cached or logged, just proxied from
the origin - so keep 'cache deny HealthChecks' and dump the
'always_direct allow HealthChecks'?  I actually tried that during my
troubleshooting phase, and it didn't seem to change anything, but I
would like to be using everything properly.


 cache deny HealthChecks
 cache allow Origin-Whatever
 http_access allow Origin-Whatever acceleratedPort

 I'd say the above two lines are the problem. Unless you are juggling
DNS
 perfectly to make clients resolve the domain as Squid, and squid
resolve
 the domain as web server, the 'dst' ACL will fail to work properly on
 accelerated requests.
 The dstdomain our_site should be used here instead.

I juggle, yes.  The load balancer uses a virtual IP, to which the
cached.whatever.com record points to, which pools traffic to my Squid
boxes.  I use /etc/hosts on the Squid boxes to point cached.whatever.com
to an internal virtual IP that pools traffic to my origin servers.  This
provides the flexibility and redundancy we need for this setup, and this
configuration has always worked fine with 2.6.

 Try the config fixes above, and if it still fails can you post a
complete
 byte-wise exact copy of the failing health check headers please?
 
 Amos

I did notice that if I edited my hosts file to point cached.whatever.com
to my new squid3 box, and requested
http://cached.whatever.com/mgmt/alive, I got my 200 response.  However
if I telnet'ed to the new squid3 box on port 80, typed 'GET /mgmt/alive'
and hit enter twice, I would get that 400.  That really leads me to
believe that a hostname is required, as opposed to problems with my
config.
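
For comparison, a minimal well-formed health check of the kind Amos describes
would carry an HTTP version (and ideally a Host header), followed by a blank
line:

GET /mgmt/alive HTTP/1.0
Host: cached.whatever.com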

Thanks again for your thoughts on this

- Gregori




RE: [squid-users] url length limit

2008-11-07 Thread Gregori Parker
So this has already been changed to 8192 bytes in the current
3.0-STABLE10 ?  I'd probably be willing to try that build, however these
are production servers, so I'm skeptical about trying bleeding-edge
versions.  3.1.0.1 would be a very hard sell - can you point me towards some
reading material on specific enhancements towards memory usage, diskless
operation, reverse-proxy, etc in the v3 branches?


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 06, 2008 7:37 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] url length limit

Gregori Parker wrote:
 Hi all - I am using an array of squid servers to accelerate dynamic
 content, running 2.6.22 and handling a daily average of about 400
 req/sec across the cluster.  We operate diskless and enjoy a great hit
 rate (80%) on very short-lived content.
 
 About 50+ times per day, the following appears in my cache.log:
 
 squid[735]: urlParse: URL too large (4738 bytes)
 squid[735]: urlParse: URL too large (4470 bytes)
 squid[735]: urlParse: URL too large (4765 bytes)
 ...
 
 I understand that Squid is configured at compile time to cut off URLs
 larger than 4096 bytes, as defined by MAX_URL in src/defines.h, and
that
 changing this has not been tested.  Nevertheless, since I am expecting
 very long URLs (all requests are long query strings, responses are
 SOAP/XML), and the ones getting cutoff are not severely over the
limit,
 I would like to explore this change further.
 
 Has anyone redefined MAX_URL in their squid setups?   Do these 'URL
too
 large' requests get logged?  If not, is there a way I could get Squid
to
 tell me what the requests were so that I can verify that we have an
 operational need to increase the URL limit?

It has been tried at 8192 with no sign of trouble in Squid-3.
If your URLs get much larger than that, we really do need it checked up as
high as 128KB, so feel free to build with larger values, just please
report back how it goes (particularly if good news).
http://www.squid-cache.org/bugs/show_bug.cgi?id=2267

I'd suggest experimenting with squid 3.1.0.1 to see if its usable in 
your setup. The URL limits have been raised to 8KB already and diskless 
operation is much more polished and native.


As for logging the URI, most things in squid are dropped when they are 
found to overflow the buffers like that.  The details may be logged to 
cache.log when debug_options is set to the right section and level. I'm 
not sure right now which one is relevant to 2.6 though, there are a few 
available.
http://wiki.squid-cache.org/KnowledgeBase/DebugSections


Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.1


[squid-users] url length limit

2008-11-06 Thread Gregori Parker
Hi all - I am using an array of squid servers to accelerate dynamic
content, running 2.6.22 and handling a daily average of about 400
req/sec across the cluster.  We operate diskless and enjoy a great hit
rate (80%) on very short-lived content.

About 50+ times per day, the following appears in my cache.log:

squid[735]: urlParse: URL too large (4738 bytes)
squid[735]: urlParse: URL too large (4470 bytes)
squid[735]: urlParse: URL too large (4765 bytes)
...

I understand that Squid is configured at compile time to cut off URLs
larger than 4096 bytes, as defined by MAX_URL in src/defines.h, and that
changing this has not been tested.  Nevertheless, since I am expecting
very long URLs (all requests are long query strings, responses are
SOAP/XML), and the ones getting cut off are not severely over the limit,
I would like to explore this change further.

Has anyone redefined MAX_URL in their squid setups?  Do these 'URL too
large' requests get logged?  If not, is there a way I could get Squid to
tell me what the requests were so that I can verify that we have an
operational need to increase the URL limit?

Thanks in advance


[squid-users] querystrings

2006-05-03 Thread Gregori Parker
I am using squid in http-acceleration mode, and it appears that squid
considers /whatever.swf?query1 and /whatever.swf?query2 two separate
items in terms of caching.

Is there any way to get squid to understand that these are the same
static file and don't need to be cached over and over?

One major caveat, I need currently have the query strings logged and I
need them to remain as suchI just want squid to ignore the query
string when caching items.

Thanks in advance




RE: [squid-users] querystrings

2006-05-03 Thread Gregori Parker

One of those client requirements for billing, etc.

I used a simple Perl script to strip those query strings, and it seems
to be working quite well.  Thanks!
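
For anyone curious, the helper boils down to something like the loop below --
sketched here in shell for brevity (the actual helper was a short Perl
script).  It strips the query string from .swf URLs and echoes every other
URL back unchanged:

#!/bin/bash
# Squid redirector: reads "URL client/fqdn ident method" lines on stdin
# and writes the (possibly rewritten) URL back on stdout, one per line.
while read url rest; do
    case "$url" in
        *.swf\?*) echo "${url%%\?*}" ;;   # drop ?query from .swf requests
        *)        echo "$url" ;;          # leave everything else alone
    esac
done

Hooked in with redirect_program as usual; Squid still logs the original URL.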


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 3:08 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] querystrings

ons 2006-05-03 klockan 12:04 -0700 skrev Gregori Parker:
 I am using squid in http-acceleration mode, and it appears that squid
 considers /whatever.swf?query1 and /whatever.swf?query2 two separate
 items in terms of caching.

Yes, and it must according to the RFC.

So far I have not been given any good explanation why shockwave files
should have this random garbage at the end of the URL. It serves no
meaningful purpose in terms of HTTP except to confuse caches.

 Is there any way to get squid to understand that these are the same
 static file and don't need to be cached over and over?

You can use a redirector to strip out the garbage from the shockwave
links, provided you know they link to static content..

 One major caveat, I need currently have the query strings logged and I
 need them to remain as suchI just want squid to ignore the query
 string when caching items.

Then the redirector is what you are looking for. Squid caches on the URL
after the redirector, but logs the URL as it was sent to Squid.

As you say you need this garbage data, could you please elaborate a bit
on why this data is needed in the links in the first place?  I suppose
it's some kind of hit metering?

Regards
Henrik



[squid-users] download rates w/ squid

2006-05-03 Thread Gregori Parker
Once again, I'm using squid as an http-accelerator...

Squid seems to be capping the transfer rate at around 80 KB/s...I'm doing this 
transfer locally from one squid box to another via wget.  If I do this same 
transfer using scp, I get around 90 MB/s sustained.  I'm also noticing drastic 
fluctuations in speed when downloading from squid...every once in awhile it 
will jump up to 300 KB/s, but then fall back down to around 80 KB/s again.

I have eliminated the network, hardware and server configuration as potential 
problems -- I'm 99.8% sure it's squid.

Does anyone have any ideas as to why squid is having trouble filling the pipe?

Here's my trimmed down conf file..
--

http_port 80
icp_port 0
# no cache_peer entries
cache_mem 256 MB
cache_swap_low 90
cache_swap_high 98
maximum_object_size 256 MB
maximum_object_size_in_memory 1024 KB
cache_replacement_policy lru
memory_replacement_policy lru
cache_dir aufs /cache0/c0 40960 16 256
cache_dir aufs /cache0/c1 40960 16 256
# ...etc, 12 in total
cache_access_log /usr/local/squid/var/logs/access.log
cache_log /usr/local/squid/var/logs/cache.log
cache_store_log none
emulate_httpd_log on
pid_filename /var/run/squid.pid
debug_options ALL,1

redirect_program /usr/local/squid/redir.pl
redirect_children 20
redirect_rewrites_host_header off
refresh_pattern . 0 0% 4320
half_closed_clients off
shutdown_lifetime 4 seconds

# ACCESS CONTROLS (simplified)
acl all src 0.0.0.0/0.0.0.0
acl origins dst xx.xxx.xxx.x/255.255.255.192
acl acceleratedPort port 80
http_access allow all
http_access allow origins acceleratedPort
http_reply_access allow all

httpd_accel_port 80
httpd_accel_host my.accelerated.hostname.com
httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header off

logfile_rotate 0
log_icp_queries off
icp_hit_stale on
client_db off
snmp_port 161
acl SNMPPasswd snmp_community nottherealstring
acl SNMPClient1 src xxx.xxx.xxx.xx/255.255.255.255
acl SNMPClient2 src xxx.xxx.xxx.xx/255.255.255.255
snmp_access allow SNMPClient1 SNMPPasswd
snmp_access allow SNMPClient2 SNMPPasswd
snmp_access deny all
uri_whitespace allow
strip_query_terms off
relaxed_header_parser warn




RE: [squid-users] download rates w/ squid

2006-05-03 Thread Gregori Parker

Curiously, I found drastic improvement when I made the following change:

httpd_accel_with_proxy on

Obviously this requires tighter acl's for proper security.

Anyone know why this made such a huge difference???


-Original Message-
From: Dan Thomson [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 6:03 PM
To: Gregori Parker
Subject: Re: [squid-users] download rates w/ squid

I'm curious about this one as well.

Oddly enough, I seem to get a quick response until an object is
cached. Once cached, the object is transferred at a greatly reduced
speed.

My setup is pretty similar but it's on a 64 bit machine with a lot of
RAM and tonnes of hard drive space (for caching)

On 5/3/06, Gregori Parker [EMAIL PROTECTED] wrote:
 Once again, I'm using squid as an http-accelerator...

 Squid seems to be capping rate of transfer to around 80 KB/s...I'm
doing this transfer locally from one squid box to another via wget. If I
do this same transfer using scp, I get around 90 MB/s sustained. I'm
also noticing drastic fluctuations in speed when downloading from
squid...every once in awhile it will jump up to 300 KB/s, but then fall
back down to around 80 KB/s again.

 I have eliminated the network, hardware and server configuration as
potential problems -- I'm 99.8% sure it's squid.

 Does anyone have any ideas as to why squid is having trouble filling
the pipe?

 Here's my trimmed down conf file..
 --

 http_port 80
 icp_port 0
 # no cache_peer entries
 cache_mem 256 MB
 cache_swap_low 90
 cache_swap_high 98
 maximum_object_size 256 MB
 maximum_object_size_in_memory 1024 KB
 cache_replacement_policy lru
 memory_replacement_policy lru
 cache_dir aufs /cache0/c0 40960 16 256
 cache_dir aufs /cache0/c1 40960 16 256
 # ...etc, 12 in total
 cache_access_log /usr/local/squid/var/logs/access.log
 cache_log /usr/local/squid/var/logs/cache.log
 cache_store_log none
 emulate_httpd_log on
 pid_filename /var/run/squid.pid
 debug_options ALL,1

 redirect_program /usr/local/squid/redir.pl
 redirect_children 20
 redirect_rewrites_host_header off
 refresh_pattern . 0 0% 4320
 half_closed_clients off
 shutdown_lifetime 4 seconds

 # ACCESS CONTROLS (simplified)
 acl all src 0.0.0.0/0.0.0.0
 acl origins dst xx.xxx.xxx.x/255.255.255.192
 acl acceleratedPort port 80
 http_access allow all
 http_access allow origins acceleratedPort
 http_reply_access allow all

 httpd_accel_port 80
 httpd_accel_host my.accelerated.hostname.com
 httpd_accel_single_host on
 httpd_accel_with_proxy off
 httpd_accel_uses_host_header off

 logfile_rotate 0
 log_icp_queries off
 icp_hit_stale on
 client_db off
 snmp_port 161
 acl SNMPPasswd snmp_community nottherealstring
 acl SNMPClient1 src xxx.xxx.xxx.xx/255.255.255.255
 acl SNMPClient2 src xxx.xxx.xxx.xx/255.255.255.255
 snmp_access allow SNMPClient1 SNMPPasswd
 snmp_access allow SNMPClient2 SNMPPasswd
 snmp_access deny all
 uri_whitespace allow
 strip_query_terms off
 relaxed_header_parser warn






RE: [squid-users] hardware to load balance squid proxies?

2006-04-17 Thread Gregori Parker

The Radwares are okay for balancing http traffic, however they are
terrible when it comes to dealing with many other types of traffic
(especially media streaming).  This is because instead of examining
metrics and determining the optimal destination up-front, it follows a
process of trial and error to find you the best destination (passive
balancing as far as I'm concerned).  The reason this works for http is
because you don't only get one shot at getting it right the first time
with it.  Radwares tend to also sticky clients, resulting in unwanted
side-effects.  Also, if you need balance globally, you might be unhappy
with the Radware proximity selection.

The Cisco CSS is a great local load balancer, and coupled with a GSSM
using dns-boomerang, makes an unbeatable global load balancing platform.

IMHO / in my experience / your mileage may vary / etc.


-Original Message-
From: Kevin [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 17, 2006 1:41 PM
To: Adrian
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] hardware to load balance squid proxies?

On 4/17/06, Adrian [EMAIL PROTECTED] wrote:
 I'm looking for a hardware solution to load balance a cluster of
 squid proxies..  I'd love to hear from anyone who has experience
 with this type of thing.

We are a satisfied customer of Radware.

While we are using a different product, Radware has their
CSD (Cache Server Director) product for load-balancing caching proxies.


 I'm looking at the Cisco LocalDirector - are there other good
 options around?

CSS 11500 offers features above and beyond the LocalDirector.

Kevin



RE: [squid-users] Rotating Logs

2006-04-05 Thread Gregori Parker
I recommend setting logfile_rotate to 0 and having a perl or shell script 
crontabbed to do the actual rotation.

For example...

# Shell script for rotating squid logfiles
#  - moves access.log and renames it access-$year$month$day$hour.log
#  - run this every hour

currentdate=$(date +%y%m%d%H)
logfile=access-$currentdate.log

mv /usr/squid/log/access.log /var/log_storage/$logfile

/usr/local/squid/sbin/squid -k rotate

 

-Original Message-
From: Jakob Curdes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 05, 2006 12:11 PM
To: Michael Coburn
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Rotating Logs


I issue the following command
 
/usr/sbin/squid -k rotate
 
and nothing seems to happen.  I have read in the docs that it should
change the log files but nothing seems to happen in /var/log/squid
 
Am I missing something?
  

More interesting than the compile options are the settings in the config 
file squid.conf. According to the compilation options you should find it 
in /etc, but beware: there might be several versions on your system. 
Make sure you are looking at the right one.
Look at the configuration variable logfile_rotate. Here is the excerpt of 
its explanation in the conf file:

#  TAG: logfile_rotate
#   Specifies the number of logfile rotations to make when you
#   type 'squid -k rotate'.  The default is 10, which will rotate
#   with extensions 0 through 9.  Setting logfile_rotate to 0 will
#   disable the rotation, but the logfiles are still closed and
#   re-opened.  This will enable you to rename the logfiles
#   yourself just before sending the rotate signal.

I suppose this is set to 0 so you see no rotation.

Yours,
Jakob Curdes




RE: [squid-users] WARNING - Queue congestion

2006-03-27 Thread Gregori Parker
I'm using aufs and I see these same messages whenever I do a complete 
restart of squid services...they don't seem to really impact usability from 
what I've seen, and they go away after about 5-10 minutes or so...I figure it's 
just squid catching up to itself after a fresh start.  If you're seeing these 
messages throughout normal operation, then you should look into making some 
configuration or hardware changes like Mike recommended.
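
For reference, the compile-time knob Mike mentions below is just the thread
count passed to --enable-async-io, so raising it means a rebuild along these
lines (48 is an example value, not a recommendation):

./configure --enable-async-io=48 ...   # plus whatever options you already use
make && make install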
 

-Original Message-
From: Mike Solomon [mailto:[EMAIL PROTECTED] 
Sent: Saturday, March 25, 2006 9:35 PM
To: pak kumis
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] WARNING - Queue congestion

This relates to the number of simultaneous requests squid is handling  
-- I'm assuming you are using AUFS.

Basically, the IO threads are not processing fast enough and the io  
request queue is getting long. Fast enough is a metric defined by  
squid (it's in store_dir.c file I think, and there isn't much  
commentary on where the heuristics originate from).

You can potentially alleviate this by increasing the number of IO  
threads at compile time - but it depends on how much disk activity  
you are seeing. A quick look at iostat (or sar data) correlated to  
the queue congestion messages should be enough to tell.

If the disks aren't saturated, I'd say you could increase the number  
of threads to at least 48 (depending on hardware), but that's not  
much more than the 36 you seem to have already. I don't know the  
maximum number of threads you can really throw at the problem, but  
you can obviously experiment.

If your disks are overloaded, there won't be much you can do (aside  
from adding more spindles, or more RAM). File system and kernel io  
tuning may yield small gains, but it won't solve the core problem.

-Mike



On Mar 24, 2006, at 12:47 PM, pak kumis wrote:

 hi,

 i got this message in my log.

 squidaio_queue_request: WARNING - Queue congestion

 my sistem use 4 hdd sata for the cache directory.

 when i type pstree i found my squid proses

  |-squid---squid-+-squid---36*[squid]
  |   |-24*[squid_redirect]
  |   `-unlinkd





RE: [squid-users] rotate bug?

2006-03-27 Thread Gregori Parker
Lol, I'm not the one with the problem -- I was just telling the guy who asked 
how I did it.  If you go into squid.conf and set logfile_rotate to 0 like I did, 
then what you just said doesn't apply (sorry - I forgot that detail).  My log 
rotation is working perfectly :)

 


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 26, 2006 10:28 PM
To: Gregori Parker; squid-users@squid-cache.org
Subject: AW: [squid-users] rotate bug?

I just use a shell script to rotate logs:

#!/bin/bash
# Shell script for rotating squid logfiles
timestamp=$(date +%y%m%d%H)
filename=ex$timestamp.log
mv /usr/local/squid/var/logs/access.log /var/squidlogs/$filename

 Oops, you steal access.log

/usr/local/squid/sbin/squid -k rotate

 Squid wants to rename access.log to access.log.0
 but you have stolen it before!



Please try:

/usr/local/squid/sbin/squid -k rotate
mv /usr/local/squid/var/logs/access.log.0 /var/squidlogs/$filename
   ===

Werner Rost



RE: [squid-users] squid performance epoll. 350req/sec 100% cpu

2006-03-27 Thread Gregori Parker
I noticed that the epoll patch wants to patch source files in a
directory called 'squid', so make sure you mv squid-2.5STABLE12/ to
squid/ after you extract from the tar.gz

# patch -p0 < epoll-2.5.patch

To bootstrap, simply cd into squid/ and ./bootstrap.sh

When I did it initially, bootstrap wasn't working for me, so I had to
downgrade my automake and autoconf to the right versions, at least for
STABLE12 which is what I was building at the time.  Here are links for
the versions that work:
http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz 
http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz  -- you'll
need to build and install them first before bootstrapping again.  When you
run bootstrap, you can ignore warnings but not errors!

After you bootstrap without errors, you should be ready to run any
preconfiguration commands you need and then configure.



-Original Message-
From: Michal Mihalik [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 27, 2006 2:05 PM
To: 'Mike Solomon'
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] squid performance epoll. 350req/sec 100% cpu

Hi
 ok I learned the strace and it does call select (99% of time)
 
 looks like my epoll is not active :-(( 
 and I found that I didn't compile it as I should have.

But now I am unable to compile because of these errors.
 I don't have automake 1.5 (only 1.4, 1.6, 1.7, 1.9)
And autoconf too.
I do have debian stable... and added the testing apt sources to it (to
get the latest squid).

Can someone help me run this?
I don't understand this whole thing of automake/autoconf.


# this one later doesn't compile cleanly
# bootstrap.sh
3
WARNING: Cannot find automake version 1.5
Trying automake (GNU automake) 1.9.6
WARNING: Cannot find autoconf version 2.13
Trying autoconf (GNU Autoconf) 2.59
acinclude.m4:10: warning: underquoted definition of
AC_CHECK_SIZEOF_SYSTYPE
  run info '(automake)Extending aclocal'
  or see
http://sources.redhat.com/automake/automake.html#Extending-aclocal
acinclude.m4:49: warning: underquoted definition of AC_CHECK_SYSTYPE
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a
type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
autoheader: WARNING: Using auxiliary files such as `acconfig.h',
`config.h.bot'
autoheader: WARNING: and `config.h.top', to define templates for
`config.h.in'
autoheader: WARNING: is deprecated and discouraged.
autoheader:
autoheader: WARNING: Using the third argument of `AC_DEFINE' and
autoheader: WARNING: `AC_DEFINE_UNQUOTED' allows to define a template
without
autoheader: WARNING: `acconfig.h':
autoheader:
autoheader: WARNING:   AC_DEFINE([NEED_FUNC_MAIN], 1,
autoheader: [Define if a function `main' is needed.])
autoheader:
autoheader: WARNING: More sophisticated templates can also be produced,
see
the
autoheader: WARNING: documentation.
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a
type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a
type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a
type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
Autotool bootstrapping complete.






Thanks
 Mike
 

 -Original Message-
 From: Mike Solomon [mailto:[EMAIL PROTECTED] 
 Sent: Monday, March 27, 2006 8:28 PM
 To: Michal Mihalik
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid performance epoll. 
 350req/sec 100% cpu
 
 I would bet that an strace on the master pid would reveal that you  
 are calling poll, not epoll.
 
 There are several postings on the list about applying the epoll  
 patch, but IIRC, you need to explicitly --disable-poll --disable- 
 select --enable-epoll for it to work.
 
 -Mike
 
 On Mar 27, 2006, at 9:09 AM, Michal Mihalik wrote:
 
  Date: Mon, 27 Mar 2006 17:31:36 +0200
  From: Michal Mihalik [EMAIL PROTECTED]
  To: squid-users@squid-cache.org
  Subject: [squid-users] squid performance epoll. 
 350req/sec 100% cpu
 
  Hello.
   I am tring to optimize squid for best possible performance.
   it is in production and it's doing more than 350req/sec.
  At peaks upto
  500req/sec.
 
   My problem is only one.  100% cpu.  :-)
 
   I tried to update my debian to 2.6.16 and recompiled squid:
 
  Squid Cache: Version 2.5.STABLE12
  configure options:  --prefix=/usr --exec_prefix=/usr
  --bindir=/usr/sbin
  --sbindir=/usr/sbin --libexecdir=/usr/lib/squid
  --sysconfdir=/etc/squid
  --localstatedir=/var/spool/squid --datadir=/usr/share/squid
  --enable-async-io --with-pthreads
  --enable-storeio=ufs,aufs,diskd,null
  --enable-linux-netfilter --enable-arp-acl
  --enable-removal-policies=lru,heap
  --enable-snmp --enable-delay-pools 

RE: [squid-users] Regarding ACL

2006-03-27 Thread Gregori Parker
It's all there in the FAQ...RTFM a little harder.

http://www.squid-cache.org/Doc/FAQ/FAQ-10.html#ss10.17

http://www.squid-cache.org/Doc/FAQ/FAQ-10.html#ss10.26

..in fact, just read that whole page in preparation for your endeavor.
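
For the impatient, the shape of what those FAQ entries describe is roughly
this -- the group address ranges, file path and office hours below are
made-up placeholders:

# group A may reach the listed sites at any time; group B is blocked
# from them during office hours
acl gpa src 10.0.1.0/24
acl gpb src 10.0.2.0/24
acl allowed_sites dstdomain "/etc/squid/allowed_sites"
acl office_hours time MTWHF 09:00-17:00
http_access allow gpa allowed_sites
http_access deny gpb allowed_sites office_hours
# ...followed by whatever general allow/deny rules you already have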

And ditch the annoying confidentiality sig when emailing a user list ffs.

 

-Original Message-
From: Jagdish [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 27, 2006 10:08 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Regarding ACL

 Hello,
 
 I have searched the FAQ Database and was not able to find an ACL which 
 will meet my requirement. My requirement is as follows :
 
 Lets say we have two groups of users(GP A and GP B). These could be 
 clients, IP Address or users. GP A should be able to browse sites like 
 yahoo ( List of sites in a file called allowed_sites) even during the 
 office timings. GP B should not be able browse those sites during the 
 office timings.
 
 How do I implement this ? Any help would be appreciated.
 
 Thanks in advance
 
 Regards
 
 Jagdish
 

##
The information transmitted is intended for the person or entity to which it is 
addressed and may contain confidential and/or privileged
material. Any review, retransmission, dissemination, copying or other use of, 
or taking any action in reliance upon, this information by
persons or entities other than the intended recipient is prohibited. If you 
have received this in error, please contact the sender and delete
the material from your system. Accord Software  Systems Pvt. Ltd. (ACCORD) is 
not responsible for any changes made to the material other
than those made by ACCORD or for the effect of the changes on the meaning of 
the material.
##



RE: [squid-users] rotate bug?

2006-03-25 Thread Gregori Parker
I just use a shell script to rotate logs:

#!/bin/bash
# Shell script for rotating squid logfiles
timestamp=$(date +%y%m%d%H)
filename=ex$timestamp.log
mv /usr/local/squid/var/logs/access.log /var/squidlogs/$filename
/usr/local/squid/sbin/squid -k rotate


...and then run it regularly by putting something in crontab:

# rotate logs once an hour
0 * * * * /usr/sbin/log_rotate.sh

 

-Original Message-
From: Mark Elsen [mailto:[EMAIL PROTECTED] 
Sent: Saturday, March 25, 2006 5:08 AM
To: lawrence wang
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] rotate bug?

 hi, i was testing out squid -k rotate on squid-2.5STABLE12, and i
 notice that cache.log and store.log rotate ok (*.0 files are created),
 but access.log doesn't; furthermore, if i restart the server,
 access.log is emptied, so i lose my old logs.

 if i rename the file after running rotate, it will keep writing to
 that one and then write to a new file after restart.

 my access.log is named squid_access.log and it's in a non-standard
 location; maybe that's why?


It can perfectly well be in a non-standard location, if so configured and/or
you are using log pointer directives in squid.conf.

Is there anything in cache.log when you try to rotate?
Look both at the end of the before-rotated cache.log (tail it) and at
the beginning
of the new one (head it).

M.



RE: [squid-users] recommendation on file system for squid

2006-03-17 Thread Gregori Parker
In my tests with Squid, I've found aufs on ext2 with the noatime option
to be the best in terms of raw performance.
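
In fstab terms that is just the noatime mount option on the cache partition;
device and mount point below are placeholders:

# /etc/fstab entry for an ext2 cache partition mounted with noatime
/dev/sdb1   /cache0   ext2   defaults,noatime   0 0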



-Original Message-
From: arabinda [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 17, 2006 3:41 AM
To: squid-users@squid-cache.org
Subject: [squid-users] recommendation on file system for squid

Hello,

Is there any specific recommendation on file system that aid the
performance
of squid?

Any suggestion?

Thanks and regards
Devel


-- 
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.385 / Virus Database: 268.2.4/283 - Release Date: 3/16/2006
 




RE: [squid-users] Hardware requirements

2006-03-06 Thread Gregori Parker
That should be fine, however I would recommend a lot more disk space for the 
cache.  Each of our servers is a 3GHz Xeon with 2GB RAM and 1TB of disk space - they 
each push 130 Mbps of traffic without any problems.
 

-Original Message-
From: Ilja Marchew [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 06, 2006 4:01 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Hardware requirements

We have 2-12 Mbit/s of traffic flow.

Is a scsi320 72GB + 1GB RAM + 2.0GHz Xeon server enough to proxy
it transparently?  Or do we need more processor/RAM?  Or do we need to
balance flow between 2-3 servers (because of non-SMP architecture of
squid)?

Thanks.



RE: [squid-users] Hardware requirements

2006-03-06 Thread Gregori Parker
Sure, we have 3 of these clustered in an all-sibling reverse-proxy setup.

2 x 3.0 GHz Xeon 64-bit
2 GB DDR-400 RAM
1 x 80 GB WD HDD
3 x 400 GB WD HDD
..running Fedora Core 4 x86_64

Squid is 2.5 STABLE12 with epoll and collapsed-forwarding patches

Configured with: --enable-async-io=32 --enable-snmp --enable-htcp 
--enable-underscores --enable-epoll

As you can see we use: htcp, aufs (with noatime options on the mounts) and snmp 
for monitoring via cacti.  Here are some items from squid.conf: 

http_port 80
icp_port 0
htcp_port 4827
cache_peer XX.XX.XXX.XXX sibling 80 4827 htcp proxy-only
cache_peer XX.XX.XXX.XXX sibling 80 4827 htcp proxy-only
cache_mem 256 MB
cache_swap_low 90
cache_swap_high 98
maximum_object_size 256 MB
maximum_object_size_in_memory 1024 KB
cache_replacement_policy lru
memory_replacement_policy lru
cache_dir aufs /cache0/c0 40960 16 256
cache_dir aufs /cache0/c1 40960 16 256
cache_dir aufs /cache0/c2 40960 16 256
cache_dir aufs /cache0/c3 40960 16 256
cache_dir aufs /cache0/c4 40960 16 256
cache_dir aufs /cache0/c5 40960 16 256
cache_dir aufs /cache1/c0 40960 16 256
cache_dir aufs /cache1/c1 40960 16 256
cache_dir aufs /cache1/c2 40960 16 256
cache_dir aufs /cache1/c3 40960 16 256
cache_dir aufs /cache1/c4 40960 16 256
cache_dir aufs /cache1/c5 40960 16 256 
refresh_pattern \.xml 0 0% 4320
refresh_pattern . 0 20% 10080 ignore-reload
httpd_accel_port 80
httpd_accel_host xx.x.xx.com
httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header off
log_icp_queries off
icp_hit_stale on
client_db off
emulate_httpd_log on
uri_whitespace allow
strip_query_terms off
relaxed_header_parser warn
...etc...




-Original Message-
From: Shoebottom, Bryan [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 06, 2006 10:25 AM
To: Gregori Parker; squid-users@squid-cache.org
Subject: RE: [squid-users] Hardware requirements

Gregori,

Can you give me the details on your entire setup?  I have a 3.4GHz Xeon with 
2GB memory and 100GB cache and with 200+ req/s my CPU is pinned.  I have a 
transparent cache with WCCP and don't use any ACLs except for SNMP.

Thanks,
 Bryan
 

-Original Message-
From: Gregori Parker [mailto:[EMAIL PROTECTED] 
Sent: March 6, 2006 1:16 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Hardware requirements

That should be fine, however I would recommend a lot more diskspace for the 
cache.  Each of our servers are 3GHz Xeon, 2GB RAM and 1TB of diskspace - they 
each push 130mbps of flow without any problems.
 

-Original Message-
From: Ilja Marchew [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 06, 2006 4:01 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Hardware requirements

We have 2-12 mbits of traffic flow.

Is scsi320 72MB + RAM 1GB + Xeon 2.0GHz server enough to proxificate
it transparently?  Or we need more processor/RAM?  Or we need to
balance flow between 2-3 servers (because of non-SMP architecture of
squid)?

Thanks.

.




[squid-users] RE: performance tuning - http-accel timeouts

2006-02-27 Thread Gregori Parker

Just wondering if anyone has any tips on tuning the timeout settings for
Squid in http-accel mode.

I have each of my squid servers pushing over 130mbps right now, and I
want to push capacity as far as possible on these boxes.

Everything related to timeouts is at the defaults right now.  Thanks!
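
For anyone else poking at the same thing, these are the main squid.conf
directives involved; the values below are only illustrative starting points,
not recommendations and not necessarily the compiled-in defaults:

connect_timeout 1 minute
read_timeout 15 minutes
request_timeout 5 minutes
client_lifetime 1 day
pconn_timeout 120 seconds
half_closed_clients off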




[squid-users] cachemgr.cgi

2006-02-24 Thread Gregori Parker
I've followed all the FAQs and searched all the various threads, but I
cant get cachemgr working.  I was happy with SNMP for a while, but now I
realize I need to examine some metrics that are only available in
cachemgr.

I don't have apache on the squid servers, so I've been trying to move
the cachemgr.cgi file to another server that does have apache.  I set up
all the aliases and chmoded the thing to 755, but I still get 500
errors.

Error message:
Premature end of script headers: cachemgr.cgi

Thanks in advance for any advice!




[squid-users] squid and swf files

2006-02-24 Thread Gregori Parker
I've been getting reports of problems with squid and swf files.  After
doing some testing, I found that a link like
http://my.squid.cache/directory/something.swf would work fine in Mozilla
but not in Internet Explorer - IE says something about downloading in
the status bar and then hangs for a long while.  I researched this a
bit, and found reports that this issue can be fixed on Apache by
sticking AddType application/x-shockwave-flash .swf in the conf file.

I noticed that squid/etc/mime.conf has the following line:

\.swf$ application/x-shockwave-flash anthony-unknown.gif - image
+download

But then I read somewhere else that mime.conf only applies to ftp/gopher
and other non-http traffic...so,

Is there something I can do to make squid handle .swf files consistently
between browsers?





RE: [squid-users] squid and swf files

2006-02-24 Thread Gregori Parker
I had the mime_table commented out, so I uncommented it, pointed it to the 
correct file, and replaced +download with +view in mime.conf...it seems to 
have fixed the problem for the time being.
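
For reference, the resulting mime.conf entry is the stock line quoted below
with the last field changed:

\.swf$  application/x-shockwave-flash  anthony-unknown.gif  -  image  +view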

Also, please disregard all my other messages (epoll, cachemgr, etc) - all is 
well now.  Well, except for peering...I don't think my all-sibling setup is 
doing a damn thing.  I'm going to try eliminating peering and then leave this 
cluster alone for awhile.

Peace -- Gregori 
 


-Original Message-
From: Mark Elsen [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 24, 2006 3:40 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squid and swf files

 I've been getting reports of problems with squid and swf files.  After
 doing some testing, I found that a link like
 http://my.squid.cache/directory/something.swf would work fine in Mozilla
 but not in Internet Explorer - IE says something about downloading in
 the status bar and then hangs for a long while.  I researched this a
 bit, and found reports that this issue can be fixed on Apache by
 sticking AddType application/x-shockwave-flash .swf in the conf file.

 I noticed that squid/etc/mime.conf as the following line:

 \.swf$ application/x-shockwave-flash anthony-unknown.gif - image
 +download

 But then I read somewhere else that mime.conf only appies to ftp/gopher
 and other non-http traffic...so,



   Where is somewhere ?


  Since my name is nobody :
 --

 From squid.conf.default :

#  TAG: mime_table
#   Pathname to Squid's MIME table. You shouldn't need to change
#   this, but the default file contains examples and formatting
#   information if you do.
#
#Default:
# mime_table /etc/squid/mime.conf


So it's highly unlikely that SQUID does not use this info
for 'http' operations.
Are you using the default setting for this value, and/or
is the specified file readable by the squid effective user?

M.



RE: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Gregori Parker

My /etc/init.d/squid ...I'm doing this already

#!/bin/bash
echo 1024 32768 > /proc/sys/net/ipv4/ip_local_port_range
echo 1024 > /proc/sys/net/ipv4/tcp_max_syn_backlog
SQUID=/usr/local/squid/sbin/squid

# increase file descriptor limits
echo 8192 > /proc/sys/fs/file-max
ulimit -HSn 8192

case $1 in

start)
   $SQUID -s
   echo 'Squid started'
   ;;

stop)
   $SQUID -k shutdown
   echo 'Squid stopped'
   ;;

esac
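
One more knob worth checking (an aside, not something confirmed in this
thread): if squid ever gets started outside this script, the limit also has
to be raised for the account that launches it, e.g. via PAM's limits.conf.
The user name below is an assumption:

# /etc/security/limits.conf
squid   soft    nofile  8192
squid   hard    nofile  8192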



From: kabindra shrestha [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 22, 2006 7:28 PM
To: Gregori Parker
Subject: Re: [squid-users] FILE DESCRIPTORS

You have to run the same command (ulimit -HSn 8192) before starting squid; it 
is working fine on my server.

---

I've done everything I have read about to increase file descriptors on 
my caching box, and now I just rebuilt a fresh clean squid.  Before I
ran configure, I did ulimit -HSn 8192, and I noticed that while
configuring it said Checking File Descriptors... 8192.  I even
double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
everything was good, even ran a ulimit -n right before starting squid
and saw 8192!  So I start her up, and in cache.log I see...

2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
x86_64-unknown-linux-gnu...
2006/02/22 19:05:08| Process ID 3657
2006/02/22 19:05:08| With 1024 file descriptors available

Arggghh.

Can anyone help me out?  This is on Fedora Core 4 64-bit

Thanks, sigh - Gregori




[squid-users] post epoll...

2006-02-23 Thread Gregori Parker
So, I rebuilt squid with the epoll patch, hoping to get cpu usage down
some...now I'm seeing this a LOT in the cache.log (more than once per
minute)

storeClientCopy3: http://xxx..com/xxx/abc.xyz - clearing
ENTRY_DEFER_READ

Should I 86 the patch?  By 86 I mean get rid of ;/~




RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Interesting...

Here's what I did (downloaded patch from here
http://devel.squid-cache.org/cgi-bin/diff2/epoll-2_5.patch?s2_5):

# tar zxvf squid.tar.gz
# mv squid-2.5STABLE12/ squid
# patch -p0 < epoll-2_5.patch
# cd squid
# ulimit -HSn 8192
# ./configure --prefix=/usr/local/squid --enable-async-io
--enable-snmp --enable-htcp --enable-underscores --enable-epoll
# make
(etc..)

Can you help me understand what I missed?  I've never worked with CVS or
bootsrap.sh, so please be specific :)   


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 10:04 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

 -Original Message-
 From: Gregori Parker [mailto:[EMAIL PROTECTED]
 Sent: Thursday, February 23, 2006 8:49 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] post epoll...
 
 
 So, I rebuilt squid with the epoll patch, hoping to get cpu usage down
 some...now I'm seeing this a LOT in the cache.log (more than once per
 minute)
 
 storeClientCopy3: http://xxx..com/xxx/abc.xyz - clearing
 ENTRY_DEFER_READ
 
 Should I 86 the patch?  By 86 I mean get rid of ;/~
 
 


That problem seems to have surfaced in May of 2005
(http://www.google.com/search?hl=enlr=q=site%3Awww.squid-cache.org%2Fm
ail-archive%2Fsquid-users%2F+ENTRY_DEFER_READbtnG=Search), and was
(apparently) fixed at that time.

How did you go about patching the Squid source?
Where you aware that you have to run bootstrap.sh after patching (and
that doing so requires specific versions of autoconf and automake)?
Did you apply the patch to the most recent Squid source (or download the
CVS version with the epoll tag)?

FWIW, I'm running epoll on Squid2.5 STABLE11 without problem.

Chris



RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Ok, I need more detail - it doesn't make a lot of sense to me.  I ran 
./bootstrap.sh where you said, and it told me this:

Trying autoconf (GNU Autoconf) 2.59
autoheader: WARNING: Using auxiliary files such as `acconfig.h', `config.h.bot'
autoheader: WARNING: and `config.h.top', to define templates for `config.h.in'
autoheader: WARNING: is deprecated and discouraged.
autoheader:
autoheader: WARNING: Using the third argument of `AC_DEFINE' and
autoheader: WARNING: `AC_DEFINE_UNQUOTED' allows to define a template without
autoheader: WARNING: `acconfig.h':
autoheader:
autoheader: WARNING:   AC_DEFINE([NEED_FUNC_MAIN], 1,
autoheader: [Define if a function `main' is needed.])
autoheader:
autoheader: WARNING: More sophisticated templates can also be produced, see the
autoheader: WARNING: documentation.
configure.in:13: warning: do not use m4_patsubst: use patsubst or m4_bpatsubst
aclocal.m4:628: AM_CONFIG_HEADER is expanded from...
configure.in:13: the top level
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
configure.in:2552: warning: do not use m4_regexp: use regexp or m4_bregexp
aclocal.m4:641: _AM_DIRNAME is expanded from...
configure.in:2552: the top level
configure.in:13: warning: do not use m4_patsubst: use patsubst or m4_bpatsubst
aclocal.m4:628: AM_CONFIG_HEADER is expanded from...
configure.in:13: the top level
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
configure.in:2552: warning: do not use m4_regexp: use regexp or m4_bregexp
aclocal.m4:641: _AM_DIRNAME is expanded from...
configure.in:2552: the top level
configure.in:2365: error: do not use LIBOBJS directly, use AC_LIBOBJ (see 
section `AC_LIBOBJ vs LIBOBJS'
  If this token and others are legitimate, please use m4_pattern_allow.
  See the Autoconf documentation.
autoconf failed
Autotool bootstrapping failed. You will need to investigate and correct
before you can develop on this source tree

Obviously I need some newer files, but I don't know where to get them or where 
to put them once I got them.  PLEASE HELP :D


 
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 11:22 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

 -Original Message-
 From: Gregori Parker [mailto:[EMAIL PROTECTED]
 Sent: Thursday, February 23, 2006 10:03 AM
 To: squid-users@squid-cache.org
 Subject: RE: [squid-users] post epoll...
 
 
 Interesting...
 
 Here's what I did (downloaded patch from here
 http://devel.squid-cache.org/cgi-bin/diff2/epoll-2_5.patch?s2_5):
 
   # tar zxvf squid.tar.gz
   # mv squid-2.5STABLE12/ squid
  # patch -p0 < epoll-2_5.patch
   # cd squid

./bootstrap.sh --- This will complain if you don't have the preferred version 
of autoconf and automake.

   # ulimit -HSn 8192
   # ./configure --prefix=/usr/local/squid --enable-async-io
 --enable-snmp --enable-htcp --enable-underscores --enable-epoll
   # make
   (etc..)
 
 Can you help me understand what I missed?  I've never worked 
 with CVS or
 bootsrap.sh, so please be specific :) 
 
 

To the best of my knowledge, every one of the patch files on 
devel.squid-cache.org have a bootstrap.sh that needs to be run after the patch 
is applied.

Chris



RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Thank you very much Chris - I think that did the trick.

When I run bootstrap again, it seems successful...but can I ignore these 
warnings?

configure.in:1392: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1493: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1494: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1495: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1496: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1497: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1498: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1499: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1500: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1501: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1502: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1904: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1933: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1957: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1392: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1488: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1489: warning: AC_TRY_RUN called without default to allow cross 
compiling
(etc...)

___
 
Gregori Parker  *  Network Administrator
___
 
 Phone 206.404.7916  *  Fax 206.404.7901
[EMAIL PROTECTED]  
 
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 12:17 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

 -Original Message-
 From: Gregori Parker [mailto:[EMAIL PROTECTED]
 Sent: Thursday, February 23, 2006 10:53 AM
 To: squid-users@squid-cache.org
 Subject: RE: [squid-users] post epoll...
 
 
 Ok, I need more detail - it doesn't make a lot of sense to 
 me.  I ran ./bootstrap.sh where you said, and it told me this:
 
 Trying autoconf (GNU Autoconf) 2.59

SNIP

 autoconf failed
 Autotool bootstrapping failed. You will need to investigate 
 and correct
 before you can develop on this source tree
 
 Obviously I need some newer files, but I don't know where to 
 get them or where to put them once I got them.  PLEASE HELP :D
 

Actually, you need older files.  :o)

As of today...

Grab http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz

Ungzip, untar, configure, make, and install.

Grab http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz

Ungzip, untar, etc. again.

Re-run bootstrap.sh

Chris



RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Thanks Chris - that did the trick

Also, thanks to Squidrunner Support Team - your advice resolved my file 
descriptor issue.

Bigups to all you guys, thanks!
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 12:17 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

 -Original Message-
 From: Gregori Parker [mailto:[EMAIL PROTECTED]
 Sent: Thursday, February 23, 2006 10:53 AM
 To: squid-users@squid-cache.org
 Subject: RE: [squid-users] post epoll...
 
 
 Ok, I need more detail - it doesn't make a lot of sense to 
 me.  I ran ./bootstrap.sh where you said, and it told me this:
 
 Trying autoconf (GNU Autoconf) 2.59

SNIP

 autoconf failed
 Autotool bootstrapping failed. You will need to investigate 
 and correct
 before you can develop on this source tree
 
 Obviously I need some newer files, but I don't know where to 
 get them or where to put them once I got them.  PLEASE HELP :D
 

Actually, you need older files.  :o)

As of today...

Grab http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz

Ungzip, untar, configure, make, and install.

Grab http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz

Ungzip, untar, etc. again.

Re-run bootstrap.sh

Chris



RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Well, everything is rebuilt, and my file descriptors are OK, but I'm still 
seeing the storeClientCopy3: http://whatever - clearing ENTRY_DEFER_READ

Any more ideas?  Or are these safely ignored?  Or are they BAD?!? 


-Original Message-
From: Gregori Parker 
Sent: Thursday, February 23, 2006 12:44 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

Thanks Chris - that did the trick

Also, thanks to Squidrunner Support Team - your advice resolved my file 
descriptor issue.

Bigups to all you guys, thanks!
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 12:17 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

 -Original Message-
 From: Gregori Parker [mailto:[EMAIL PROTECTED]
 Sent: Thursday, February 23, 2006 10:53 AM
 To: squid-users@squid-cache.org
 Subject: RE: [squid-users] post epoll...
 
 
 Ok, I need more detail - it doesn't make a lot of sense to 
 me.  I ran ./bootstrap.sh where you said, and it told me this:
 
 Trying autoconf (GNU Autoconf) 2.59

SNIP

 autoconf failed
 Autotool bootstrapping failed. You will need to investigate 
 and correct
 before you can develop on this source tree
 
 Obviously I need some newer files, but I don't know where to 
 get them or where to put them once I got them.  PLEASE HELP :D
 

Actually, you need older files.  :o)

As of today...

Grab http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz

Ungzip, untar, configure, make, and install.

Grab http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz

Ungzip, untar, etc. again.

Re-run bootstrap.sh

Chris




RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
FYI, I set half_closed_clients to off and that seemed to get rid of like 95% of 
those messages. 
 


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 3:03 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

tor 2006-02-23 klockan 14:26 -0800 skrev Gregori Parker:
 Well, everything is rebuilt, and my file descriptors are OK, but I'm still 
 seeing the storeClientCopy3: http://whatever - clearing ENTRY_DEFER_READ
 
 Any more ideas?  Or are these safely ignored?  Or are they BAD?!? 

Looks like just some debug output. It's probably safe to edit the debug
statement to use log level 2..

As has been stated earlier in this thread, the epoll patch is a work in
progress. No guarantees it won't fry your computer and eat your lunch.
It's always recommended to have some hard skin when trying out patches
from devel.squid-cache.org.

Regards
Henrik



RE: [squid-users] rebuilding question

2006-02-22 Thread Gregori Parker
By wiser, I mean: will squid just pick up where it left off with the cache as 
if nothing happened?  Or will items in the cache become alien to squid?

Not a big deal either way, I'll just try it and if I have to wipe the caches, 
so be it.
 

-Original Message-
From: Mark Elsen [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 22, 2006 12:56 AM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] rebuilding question

 :

 I'm preparing to rebuild squid on a few servers within a production
 cluster to apply the epoll patch and fix a FD issue.  Once everything is
 rebuilt (same configuration options), do I have to run squid -z
 initially?  Or, can squid reuse the existing cache directories after
 being rebuilt?

  You don't have to run 'squid -z'; mind you, the epoll patch is, I believe,
not ready for production use.
There has been a thread about this recently, check the archives.

 I guess my question is, if the config files don't change and the cache
 is still the same, will squid be the wiser?


  Define wiser ?

  M.



[squid-users] FILE DESCRIPTORS

2006-02-22 Thread Gregori Parker
Sorry to be pounding the list lately, but I'm about to lose it with
these file descriptors...

I've done everything I have read about to increase file descriptors on
my caching box, and now I just rebuilt a fresh clean squid.  Before I
ran configure, I did ulimit -HSn 8192, and I noticed that while
configuring it said Checking File Descriptors... 8192.  I even
double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
everything was good, even ran a ulimit -n right before starting squid
and saw 8192!  So I start her up, and in cache.log I see...

2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
x86_64-unknown-linux-gnu...
2006/02/22 19:05:08| Process ID 3657
2006/02/22 19:05:08| With 1024 file descriptors available

Arggghh.

Can anyone help me out?  This is on Fedora Core 4 64-bit

Thanks, sigh - Gregori
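
(The usual culprit with this symptom is that the raised limit applies to the
build shell but not to the environment that actually launches Squid.  A rough
sketch of the common workaround - paths assume the default /usr/local/squid
prefix and a simple startup script, so adjust to taste:)

# in the shell used to build:
ulimit -HSn 8192
./configure ...
make && make install

# and again in whatever script starts squid (init script, rc.local, ...),
# immediately before the binary is invoked:
ulimit -HSn 8192
/usr/local/squid/sbin/squid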



[squid-users] rebuilding question

2006-02-21 Thread Gregori Parker

I'm preparing to rebuild squid on a few servers within a production
cluster to apply the epoll patch and fix a FD issue.  Once everything is
rebuilt (same configuration options), do I have to run squid -z
initially?  Or, can squid reuse the existing cache directories after
being rebuilt?

I guess my question is, if the config files don't change and the cache
is still the same, will squid be the wiser?

Thanks :)




RE: [squid-users] An access analyzer that works with Squid

2006-02-15 Thread Gregori Parker
I've implemented AWStats to do enterprise statistics processing for a content 
delivery system with over 1000 virtual hosts...the trick was getting a separate 
awstats.conf file for each virtual host, and running awstats on each one.  I 
had to hack the perl scripts a bit, and write a sentry app that searches 
through directories and launches awstats on each conf file it finds.  
Coincidentally, my company made the move from a similar Webalizer 
implementation for the same reason: not supported for 3 years.

You can use a shared logfile location (the access.log coming from squid 
obviously), and set up the awstats.conf to disregard everything that doesn't 
match the relevant vhost that's being processed.  It may not perform like a dream 
(depending on how many vhosts you're talking about), but it will work.  Look 
through awstats documentation, your solution should be in the construction of 
the awstats.conf files.

Btw, to answer some of your other questions: yes, your reasoning is correct, 
you'll want to use Squid's logs to get accurate stats because Apache's logs 
will just be a fraction of the actual hits.  You don't need the custom log 
format patch, awstats can handle both Squid format and the Native CLF that 
Squid can emulate (emulate_httpd_log in squid.conf).

If it's too complicated to get awstats working with a shared log, then grep 
them on the fly (as logs are rotated perhaps) into separate log files for each 
vhost.  Better yet, use perl to do it.  I love perl.  Hope that helps somewhat.

- Gregori
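
(A rough sketch of the grep-at-rotation idea - hypothetical paths and vhost
names, and it assumes the vhost appears in the logged URL, as it does with
Squid-native format logs:)

#!/bin/sh
# split a rotated Squid access log into one file per virtual host
LOG=/var/log/squid/access.log.0
OUTDIR=/var/log/squid/vhosts
mkdir -p $OUTDIR
for VHOST in www.example.com media.example.com static.example.com
do
    grep "://$VHOST/" $LOG >> $OUTDIR/$VHOST.log
done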


-Original Message-
From: Maciej Zięba [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 15, 2006 10:54 AM
To: squid-users@squid-cache.org
Subject: [squid-users] An access analyzer that works with Squid

Hi :)

I'm looking for a (log) analyzer that can give me the access/traffic
statistics of an Apache webserver that Squid is accelerating. More
precisely - I need separate stats for each virtual host that runs on the
webserver.

I think it would be best if I present the situation more closely... :)

I have an Apache webserver running on port 81 which has a couple of
vhosts (some of them are Zope instances, but I don't think that it
matters) and it is accelerated by Squid running on the same machine, on
port 80. As I've already said - I need statistics for each vhost and not
for all of them.

As I understand it, I cannot use the Apache main access_log and the vhosts'
logs because not all requests reach them (Squid caches and accelerates
content that hasn't changed), so I'm left with Squid's access.log (all
traffic passes through Squid). Is my reasoning correct?

Anyhow, previously (before we used Squid) my company used Webalizer
to parse Apache's logs (main and vhosts'), but it hasn't been
developed for over 3 years and, as I've mentioned, we can't use those logs
anymore...

I've come over AWStats and thought it would be a good choice.
Unfortunately, all I can do is get stats for the entire Apache server and not
for the vhosts. That's because it's impossible for it to get the virtual host's
name from Squid's access.log (neither in native, nor in common format).
I've found this patch that would solve my problem by enabling custom
logformat:

http://devel.squid-cache.org/customlog/

but I cannot use it and I cannot install the development Squid 3 - it's
company's semi-production machine and has to be stable :(

Is there some other way I can get AWStats running?

Or maybe you could recommend some other good tool for generating
statistics (HTML with things like graphs, most visited sites, etc.) from
squid's logs?

I'm sorry for the lengthy e-mail. I hope someone can give me pointers -
I'll be very grateful for any...

Umm... And please excuse my not-so-good English :|

Best regards,
Maciej Zieba



RE: [squid-users] Redirect

2006-02-15 Thread Gregori Parker
1) Use a redirector.  The Squid FAQ has an entire section on it.
http://www.squid-cache.org/Doc/FAQ/FAQ-15.html 
..I recommend Squirm: http://squirm.foote.com.au/ 

2) configure a pattern (this is squirm format)

regexi .*/whatever/pattern/ https://whatever.com/login.jsp

3) be more patient, people will help you if they can...but may choose not to if 
you reply to your own messages or sound too demanding.
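
(For completeness, the squid.conf side of wiring in a redirector like Squirm is
roughly the following - example paths, see the FAQ chapter above for the full
story; the regexi line from step 2 goes into Squirm's own pattern file:)

redirect_program /usr/local/squirm/bin/squirm
redirect_children 5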
 


-Original Message-
From: Fernando Rodriguez [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 15, 2006 4:14 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Redirect


Can someone point me in the right direction for doing this? Or tell me
where I can find information regarding my requirements...




Hello,
 
How can I catch a URL that matches a pattern and redirect that request to a
login screen so it can be reprocessed?
 
Thanks
 
Fernando Rodriguez V.
m.net
 
 
 




RE: [squid-users] DENIED using httpd acceleration

2006-02-15 Thread Gregori Parker
Make sure you have the following in place!

acl all src 0.0.0.0/0.0.0.0
acl localhost dst 127.0.0.1/255.255.255.255
acl origin_SEA dst 63.251.167.0/255.255.255.192
acl origin_ATL dst 64.95.53.0/255.255.255.192
acl acceleratedPort port 80
http_access allow all
http_access allow localhost acceleratedPort
http_reply_access allow all


AND THESE!

httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header off


For clarity: are you getting a denied page from Squid or from Apache?


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 15, 2006 5:10 PM
To: squid-users@squid-cache.org
Subject: [squid-users] DENIED using httpd acceleration

hi,

I just configured my web server to use Squid's httpd_acceleration
feature, but incoming requests to Squid have been denied; I
even used 'http_access allow all' but no luck...

squid.conf

http_port 80
http_accel_port 80
http_accel_host 127.0.0.1


---httpd.conf

Listen 127.0.0.1:80

So I tried a different approach since I am using vhosts

--squid.conf

http_port 80
http_accel_port 80
http_accel_host virtual

---httpd.conf

Listen 127.0.0.1:80

NameVirtualHost 127.0.0.1:80

VirtualHost 127.0.0.1:80 so on and so forth..


But still I am denied... is there something I am missing? I have
followed everything in the Squid FAQ. Thank you very much in advance to
anyone who would like to help...


Sincerely,

J.N. Nengasca







RE: [squid-users] DENIED using httpd acceleration

2006-02-15 Thread Gregori Parker
Sorry - my message got garbled...it should have read:


Make sure you have the following in place!

acl all src 0.0.0.0/0.0.0.0
acl localhost dst 127.0.0.1/255.255.255.255
acl acceleratedPort port 80
http_access allow all
http_access allow localhost acceleratedPort
http_reply_access allow all


AND THESE!

httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header off


For clarity: are you getting a denied page from Squid or from Apache?
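
(If the backend really is serving name-based virtual hosts, the Squid 2.5
accelerator setup usually ends up closer to this sketch - not a drop-in fix,
and it assumes Apache is moved to port 81 on the loopback so the two don't
fight over port 80; the key point is that the Host header has to be passed
through for vhost selection to work:)

http_port 80
httpd_accel_host 127.0.0.1
httpd_accel_port 81
httpd_accel_single_host on
httpd_accel_uses_host_header on
httpd_accel_with_proxy off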


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 15, 2006 5:10 PM
To: squid-users@squid-cache.org
Subject: [squid-users] DENIED using httpd acceleration

hi,

I just configured my web server to use Squid's httpd_acceleration
feature, but incoming requests to Squid have been denied; I
even used 'http_access allow all' but no luck...

squid.conf

http_port 80
http_accel_port 80
http_accel_host 127.0.0.1


---httpd.conf

Listen 127.0.0.1:80

So I tried a different approach since I am using vhosts

--squid.conf

http_port 80
http_accel_port 80
http_accel_host virtual

---httpd.conf

Listen 127.0.0.1:80

NameVirtualHost 127.0.0.1:80

VirtualHost 127.0.0.1:80 so on and so forth..


But still I am denied... is there something I am missing? I have
followed everything in the Squid FAQ. Thank you very much in advance to
anyone who would like to help...


Sincerely,

J.N. Nengasca








RE: [squid-users] squid logging

2006-02-10 Thread Gregori Parker
AWESOME - thanks mate!

One more question regarding this...

I'm trying to get the date format looking like 2006-02-10 but I only seem to 
have options for 10/Feb/2006:11:00:00 - - any ideas?

Also, the doc claims that %rq is a valid token for the query line, but when I 
have it in my config, squid won't start; it just tells me:

FATAL: Can't parse configuration token: '%rq %a %st'
 

Ultimately, I'm looking to get the logs like this...

2006-02-10 09:00:00 /path/to/file/without/hostname/filename.swf 
query=stringwithout=thequestion=mark 123.45.67.89 22148

(resembling IIS/W3C logging, as much as it pains me)
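
(For what it's worth, the custom log patch discussed in this thread later
became the built-in logformat directive in Squid 2.6 and newer; under that
syntax something in the spirit of the format above might look like the lines
below - give or take the exact tokens, and it does not split the query string
into its own field:)

logformat w3cish %{%Y-%m-%d %H:%M:%S}tl %ru %>a %<st
access_log /usr/local/squid/var/logs/access.log w3cish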


-Original Message-
From: Kevin [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 09, 2006 10:30 PM
To: Gregori Parker
Cc: Squid ML
Subject: Re: [squid-users] squid logging

On 2/9/06, Gregori Parker [EMAIL PROTECTED] wrote:
 I currently have Squid logging to access.log in httpd
 emulation...unfortunately, our origin servers log in W3C format.  We're
 working to make our parsers smart enough to handle it, but I thought
 it's worth asking: Are there any other controls over the format of
 access.log besides emulate_httpd_log?  Perhaps a patch or module?

 I would LOVE to have the ability to designate what fields get logged so
 I can trim the fat :)  Thanks in advance - Gregori

Yes, there is a patch which gives full control.

See the custom log patch, found from
http://devel.squid-cache.org/old_projects.html#customlog

I've been using it for many months now, no problems.

Kevin



[squid-users] squid logging

2006-02-09 Thread Gregori Parker
I currently have Squid logging to access.log in httpd
emulation...unfortunately, our origin servers log in W3C format.  We're
working to make our parsers smart enough to handle it, but I thought
it's worth asking: Are there any other controls over the format of
access.log besides emulate_httpd_log?  Perhaps a patch or module?

I would LOVE to have the ability to designate what fields get logged so
I can trim the fat :)  Thanks in advance - Gregori



RE: [squid-users] Performance problems - need some advice

2006-02-07 Thread Gregori Parker
Yes, please keep it on the squid-list...I for one am interested in this thread.

I just deployed 3 squid servers in a similar configuration (reverse-proxy 
serving large media files)...except each server of ours is dual 3Ghz Xeon, 
64-bit everything, 4GB RAM and around a TB each of dedicated cache space (aufs 
on ext2 with noatime option).  They are running Squid 2.5 STABLE12 on Fedora 
Core 4 x86_64.  Disk performance looks fine to me, but I'm concerned because 
top reports that squid is averaging 70% cpu usage most of the time.

Can anyone recommend techniques for assessing squid performance?  I have no 
good way of benchmarking our clusters since SNMP isn't ready quite yet.  Please 
don't mention cache_mgr, thanks :)
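
(The usual first pass at that kind of question is OS-level rather than
Squid-level; a quick sketch with standard tools, nothing Squid-specific:)

# per-disk utilisation and iowait (iostat is in the sysstat package)
iostat -x 5
# memory, swap and run-queue behaviour over time
vmstat 5
# file descriptors currently held open by the squid process
ls /proc/$(pgrep -o squid)/fd | wc -l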
 

-Original Message-
From: Kinkie [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 07, 2006 3:23 PM
To: Jeremy Utley
Cc: Squid ML
Subject: Re: [squid-users] Performance problems - need some advice

On Tue, 2006-02-07 at 12:49 -0800, Jeremy Utley wrote:
 On 2/7/06, Kinkie [EMAIL PROTECTED] wrote:
 
  Profiling your server would be the first step.
  How does it spend its CPU time? Within the kernel? Within the squid
  process? In iowait? What's the number of open filedescriptors in Squid
  (you can gather that from the cachemgr)? And what about disk load? How
  much RAM does the server have, how much of it is used by squid?
 
 I was monitoring the servers as we brought them online last night in
 most respects - I wasn't monitoring file descriptor usage, but I do
 have squid patched to support more than the standard number of file
 descriptors, and am using the ulimit command according to the FAQ. 

That can be a bottleneck if you're building up a SYN backlog. Possible
but relatively unlikely.

 When I was monitoring, squid was still building its cache, and squid
 was using most of the system memory at that time.  It seems our major
 bottleneck is in Disk I/O - if squid can fulfill a request out of
 memory, everything is fine, but if it has to go to the disk cache,
 performance suffers.

That can be expected to a degree. So are you seeing lots of IOWait in
the system stats?

   Right now, we have 5 18GB SCSI disks holding our
 cache; 2 of those are on the primary SCSI controller with the OS disk,
 the other 3 on the secondary.

How are the cache disks arranged? RAID? No RAID (aka JBOD)?

   Could there perhaps be better
 performance with one larger disk on one controller with the OS disk,
 and another larger disk on the secondary controller?

No, in general more spindles are good because they can perform in
parallel. What kind of cache_dir system are you using? aufs? diskd?

 We're also
 probably a little low on RAM in the machines - each of the 2 current
 squid servers has 2GB of RAM installed.

I assume that you're serving much more content than that, right?

 Right now, we have 4 Apache servers in a cluster, and these machines
 currently max out at about 300Mb/s.  Our hope is to utilize squid to
 push this up to about 500Mb/s, if possible.  Has anyone out there ever
 gotten a squid server to push that kind of traffic?  Again, the files
 served from these servers range from a few hundred KB to around 4MB in
 size.

In raw terms, Apache should outperform Squid due to more specific OS
support. Squid outperforms Apache in flexibility, manageability and by
offering more control over the server and what the clients can and
cannot do.

Please keep the discussion on the mailing-list. It helps get more ideas
and also it can provide valuable feedback for others who might be
interested in the same topics.

-- 
Kinkie [EMAIL PROTECTED]



RE: [squid-users] Squid and iptables - need help

2006-02-06 Thread Gregori Parker
Thanks Chris, I got rid of a lot of redundancy and replaced general
rules with much more specific ones (e.g. SSH et al. now have source/destination IP
space constraints)...everything seems to be working fine now!


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 06, 2006 10:59 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid and iptables - need help

Hi...

 -Original Message-
 From: Gregori Parker [mailto:[EMAIL PROTECTED]
 Sent: Friday, February 03, 2006 10:25 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid and iptables - need help
 
 
 I have just deployed a cluster of squid caching servers in 
 reverse proxy
 mode, and am having trouble with iptables.  When iptables is 
 turned on,
 I can hit the caching servers, but squid times out trying to pull from
 the origin servers (in our other datacenters).
 
 I'm thinking that if I add outgoing rules for our other datacenters,
 everything should be fine, but they are now in production and I cant
 simply test at will...I'm planning on adding these lines, can anyone
 tell me if this will fix my timeout problem when squid tries to pull
 from the origin servers?  I'm green on iptables configuration, so any
 advice in general is welcome!  Sorry for the long email, and 
 thank you!
 
 Lines I plan to add:
 
 # Allow anything *to* our various datacenters
 $IPTABLES -A OUTPUT -d XX.XX.XXX.XXX/26 -p all -j ACCEPT
 $IPTABLES -A OUTPUT -d XX.XX1.XXX.X/26 -p all -j ACCEPT
 $IPTABLES -A OUTPUT -d XX.XX.XX.X/26 -p all -j ACCEPT
 

Replace. Don't add...

 
 Or maybe I can just add this instead:
 
 $IPTABLES -A OUTPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
 

This would be the same thing as $IPTABLES --policy OUTPUT ACCEPT.

 
 Here's the current iptables script:
 --
 --
 -
 #!/bin/sh
 
 LAN=eth1
 INTERNET=eth0
 IPTABLES=/sbin/iptables
 
 # Drop ICMP echo-request messages sent to broadcast or multicast
 addresses
 echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
 
 # Drop source routed packets
 echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
 
 # Enable TCP SYN cookie protection from SYN floods
 echo 1 > /proc/sys/net/ipv4/tcp_syncookies
 
 # Don't accept ICMP redirect messages
 echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
 
 # Don't send ICMP redirect messages
 echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
 
 # Enable source address spoofing protection
 echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
 
 # Log packets with impossible source addresses
 echo 1 > /proc/sys/net/ipv4/conf/all/log_martians
 
 # Flush all chains
 $IPTABLES --flush
 
 # Allow unlimited traffic on the loopback interface
 $IPTABLES -A INPUT -i lo -j ACCEPT
 $IPTABLES -A OUTPUT -o lo -j ACCEPT
 
 # Set default policies
 $IPTABLES --policy INPUT DROP
 $IPTABLES --policy OUTPUT DROP
 $IPTABLES --policy FORWARD DROP
 
 # Previously initiated and accepted exchanges bypass rule checking
 $IPTABLES -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 $IPTABLES -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 

Change these lines...

 # Allow anything from our various datacenters
 $IPTABLES -A INPUT -s XX.XX.XXX.XXX/26 -p all -j ACCEPT
 $IPTABLES -A INPUT -s XX.XX1.XXX.X/26 -p all -j ACCEPT
 $IPTABLES -A INPUT -s XX.XX.XX.X/26 -p all -j ACCEPT
 

...to...

# Allow anything *to* our various datacenters
$IPTABLES -A OUTPUT -d XX.XX.XXX.XXX/26 -p all -j ACCEPT
$IPTABLES -A OUTPUT -d XX.XX1.XXX.X/26 -p all -j ACCEPT
$IPTABLES -A OUTPUT -d XX.XX.XXX.X/26 -p all -j ACCEPT

... and Squid will be able to query your datacenters.  Responses will be
allowed by the --state ESTABLISHED,RELATED rule.  It would probably be
a good idea to make this rule a bit more stringent (only allow TCP on
port 80, or what-have-you).  But it's a good start.
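
For instance, a tighter version of those three lines might look like this
(a sketch - fill in the real prefixes and ports you actually need):

$IPTABLES -A OUTPUT -d XX.XX.XXX.XXX/26 -p tcp --destination-port 80 -m state \
--state NEW -j ACCEPT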

 # Allow incoming port 22 (ssh) connections on external interface
 $IPTABLES -A INPUT -i $INTERNET -p tcp --destination-port 22 
 -m state \
 --state NEW -j ACCEPT
 

I'd REALLY strongly recommend you limit which hosts can connect to port
22.  There is no shortage of SSH scanners in the wild.

 # Allow incoming port 4827 (squid-htcp) connections on external
 interface
 $IPTABLES -A INPUT -i $INTERNET -p tcp --destination-port 
 4827 -m state
 \
 --state NEW -j ACCEPT
 
 # Allow incoming port 80 (http) connections on external interface
 $IPTABLES -A INPUT -i $INTERNET -p tcp --destination-port 80 
 -m state \
 --state NEW -j ACCEPT
 
 # Allow ICMP ECHO REQUESTS
 $IPTABLES -A INPUT -i $INTERNET -p icmp --icmp-type echo-request -j
 ACCEPT
 $IPTABLES -A INPUT -p icmp -j ACCEPT
 $IPTABLES -A OUTPUT -p icmp -j ACCEPT
 
 
 # Allow DNS resolution
 $IPTABLES -A OUTPUT -o $INTERNET -p udp --destination-port 53 
 -m state \
 --state NEW -j ACCEPT
 $IPTABLES -A OUTPUT -o $INTERNET -p tcp --destination-port 53 
 -m state \
 --state NEW -j ACCEPT
 
 # Allow ntp synchronization
 $IPTABLES -A OUTPUT -o

[squid-users] Squid and iptables - need help

2006-02-03 Thread Gregori Parker
I have just deployed a cluster of squid caching servers in reverse proxy
mode, and am having trouble with iptables.  When iptables is turned on,
I can hit the caching servers, but squid times out trying to pull from
the origin servers (in our other datacenters).

I'm thinking that if I add outgoing rules for our other datacenters,
everything should be fine, but they are now in production and I cant
simply test at will...I'm planning on adding these lines, can anyone
tell me if this will fix my timeout problem when squid tries to pull
from the origin servers?  I'm green on iptables configuration, so any
advice in general is welcome!  Sorry for the long email, and thank you!

Lines I plan to add:

# Allow anything *to* our various datacenters
$IPTABLES -A OUTPUT -d XX.XX.XXX.XXX/26 -p all -j ACCEPT
$IPTABLES -A OUTPUT -d XX.XX1.XXX.X/26 -p all -j ACCEPT
$IPTABLES -A OUTPUT -d XX.XX.XX.X/26 -p all -j ACCEPT


Or maybe I can just add this instead:

$IPTABLES -A OUTPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT


Here's the current iptables script:

-
#!/bin/sh

LAN=eth1
INTERNET=eth0
IPTABLES=/sbin/iptables

# Drop ICMP echo-request messages sent to broadcast or multicast
addresses
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

# Drop source routed packets
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route

# Enable TCP SYN cookie protection from SYN floods
echo 1 > /proc/sys/net/ipv4/tcp_syncookies

# Don't accept ICMP redirect messages
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects

# Don't send ICMP redirect messages
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

# Enable source address spoofing protection
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter

# Log packets with impossible source addresses
echo 1 > /proc/sys/net/ipv4/conf/all/log_martians

# Flush all chains
$IPTABLES --flush

# Allow unlimited traffic on the loopback interface
$IPTABLES -A INPUT -i lo -j ACCEPT
$IPTABLES -A OUTPUT -o lo -j ACCEPT

# Set default policies
$IPTABLES --policy INPUT DROP
$IPTABLES --policy OUTPUT DROP
$IPTABLES --policy FORWARD DROP

# Previously initiated and accepted exchanges bypass rule checking
$IPTABLES -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPTABLES -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow anything from our various datacenters
$IPTABLES -A INPUT -s XX.XX.XXX.XXX/26 -p all -j ACCEPT
$IPTABLES -A INPUT -s XX.XX1.XXX.X/26 -p all -j ACCEPT
$IPTABLES -A INPUT -s XX.XX.XX.X/26 -p all -j ACCEPT

# Allow incoming port 22 (ssh) connections on external interface
$IPTABLES -A INPUT -i $INTERNET -p tcp --destination-port 22 -m state \
--state NEW -j ACCEPT

# Allow incoming port 4827 (squid-htcp) connections on external
interface
$IPTABLES -A INPUT -i $INTERNET -p tcp --destination-port 4827 -m state
\
--state NEW -j ACCEPT

# Allow incoming port 80 (http) connections on external interface
$IPTABLES -A INPUT -i $INTERNET -p tcp --destination-port 80 -m state \
--state NEW -j ACCEPT

# Allow ICMP ECHO REQUESTS
$IPTABLES -A INPUT -i $INTERNET -p icmp --icmp-type echo-request -j
ACCEPT
$IPTABLES -A INPUT -p icmp -j ACCEPT
$IPTABLES -A OUTPUT -p icmp -j ACCEPT


# Allow DNS resolution
$IPTABLES -A OUTPUT -o $INTERNET -p udp --destination-port 53 -m state \
--state NEW -j ACCEPT
$IPTABLES -A OUTPUT -o $INTERNET -p tcp --destination-port 53 -m state \
--state NEW -j ACCEPT

# Allow ntp synchronization
$IPTABLES -A OUTPUT -o $INTERNET -p udp --destination-port 123 -m state
\
--state NEW -j ACCEPT

# allow anything on the trusted interface
$IPTABLES -A INPUT -i $LAN -p all -j ACCEPT
$IPTABLES -A OUTPUT -o $LAN -p all -j ACCEPT

# Have these rules take effect when iptables is started
/sbin/service iptables save

--



RE: [squid-users] Reverse proxy to different servers for different urls

2006-02-01 Thread Gregori Parker
You can use a redirector to do this.  Check out Squirm 
http://squirm.foote.com.au/ (what I use, you can write your own pretty easily 
if you want) and use a regular expression like...

/^(.*?)domainname\.com\/application1\/(.*)$/\1server1\/\2/

That would take a url of *.domainname.com/* and change it to *.server1/* 
without the application1 portion.

Redirectors: http://www.squid-cache.org/Doc/FAQ/FAQ-15.html
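
(Fleshed out a little, the Squirm pattern file for the setup described below
might contain something like the following - hostnames are the poster's
examples, and the exact escaping depends on Squirm's regex handling:)

regexi ^http://domainname\.com/application1/(.*)$ http://server1/\1
regexi ^http://domainname\.com/application2/(.*)$ http://server2/\1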

 


-Original Message-
From: Tim McAuley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 01, 2006 5:40 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Reverse proxy to different servers for different urls

Hi,

I have been looking through the documentation and mailing archives and
have not found a clear way to do what I want.

What I want is:

Using a reverse proxy, I would like to be able to redirect requests to
different backend servers depending on the url being used (not the server
name).

I know it is possible to configure squid to work with multiple domains and
point the request to different servers according to the domain name, so what I
want is pretty similar, except using the same domain name all the time.

Example:
http://domainname.com/application1/index.html -> squid -> server1
(http://server1/index.html)

http://domainname.com/application2/index.html -> squid -> server2
(http://server2/index.html)

Ideally, application1/2 would be stripped out of the URL, but that's not necessary.

Is this possible?

I'm running on Squid 2.5, pre-compiled for windows (on windows 2000).

Also, the incoming requests are HTTPS and these are converted to HTTP for
the final server. So after the SSL is decrypted, squid should be able to
see the URL in the request (I assume).

Any hints gratefully received.

Many thanks,

Tim








RE: [squid-users] RAID, 64bit and cache_dir size

2006-01-13 Thread Gregori Parker

Thanks for the answers...I forgot to mention that this deployment of
squid will be used to accelerate back-end servers and geographically
extend our CDN.  I was planning to use RAID5; am I hearing this is okay
for non-proxy implementations?


-Original Message-
From: Gregori Parker 
Sent: Thursday, January 12, 2006 9:51 AM
To: squid-users@squid-cache.org
Subject: [squid-users] RAID, 64bit and cache_dir size


I've been reading up in preparation for a deployment of Squid into a
large enterprise cluster to extend our CDN, and I have been unable to
determine solid answers for the following questions.  Thanks in advance
for any insight you guys can provide.

  I have read that RAID is a bad idea for squid caches, however I am
unable to find any reasoning for this aside from performance concerns.
I'm using aufs and don't really see a hit between cache_dir's on a fixed
disk and those on an array...but then again, it's possible I'm not
examining the right metrics.  Has anyone had any problems with putting
their cache_dir on a RAID?

  Has anyone had any issues running Squid in a 64-bit environment?  I
plan to use Fedora Core 4 x86_64 and was wondering if anyone had any
experiences (good or bad) with this.

  Finally, I'm interested in what was just asked about large
cache_dir's: Is it better to have one large cache_dir (1 TB for example)
or multiple smaller cache_dir's (5 x 200 GB) - I'm mostly concerned with
performance and number of file descriptors.  Each server will have 4 GB
of RAM, which according to my math, should be plenty for this large of a
cache...also worth noting that cached objects will be a minimum of
around 500KB each.

Thanks again,
Gregori




[squid-users] RAID, 64bit and cache_dir size

2006-01-12 Thread Gregori Parker

I've been reading up in preparation for a deployment of Squid into a
large enterprise cluster to extend our CDN, and I have been unable to
determine solid answers for the following questions.  Thanks in advance
for any insight you guys can provide.

  I have read that RAID is a bad idea for squid caches, however I am
unable to find any reasoning for this aside from performance concerns.
I'm using aufs and don't really see a hit between cache_dir's on a fixed
disk and those on an array...but then again, it's possible I'm not
examining the right metrics.  Has anyone had any problems with putting
their cache_dir on a RAID?

  Has anyone had any issues running Squid in a 64-bit environment?  I
plan to use Fedora Core 4 x86_64 and was wondering if anyone had any
experiences (good or bad) with this.

  Finally, I'm interested in what was just asked about large
cache_dir's: Is it better to have one large cache_dir (1 TB for example)
or multiple smaller cache_dir's (5 x 200 GB) - I'm mostly concerned with
performance and number of file descriptors.  Each server will have 4 GB
of RAM, which according to my math, should be plenty for this large of a
cache...also worth noting that cached objects will be a minimum of
around 500KB each.

Thanks again,
Gregori
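
(For reference, the "multiple smaller cache_dir's" option from the question
above would look something like this in squid.conf - sizes are in MB, the
mount points are assumptions, and one directory per physical spindle is the
usual aim:)

# cache_dir aufs <directory> <size-MB> <L1> <L2>
cache_dir aufs /cache1 200000 64 256
cache_dir aufs /cache2 200000 64 256
cache_dir aufs /cache3 200000 64 256
cache_dir aufs /cache4 200000 64 256
cache_dir aufs /cache5 200000 64 256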