Re: [squid-users] Can you run two instances of Squid?

2008-06-12 Thread David Lawson
Make sure you have a different PID file defined, among other things;
I'd guess that's your problem.
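
As a rough sketch (all paths and ports below are only illustrative), the second instance's squid.conf needs its own values for at least these directives:

# second-instance squid.conf -- values are illustrative
# use a port the first instance isn't already bound to
http_port 3129
# set to 0 if you don't use ICP
icp_port 3131
pid_filename /usr/local/squid/var/run/squid2.pid
cache_dir ufs /usr/local/squid/var/cache2 100 16 256
cache_log /usr/local/squid/var/logs/cache2.log
# access_log on 2.6+; the 2.5 directive is cache_access_log
access_log /usr/local/squid/var/logs/access2.log

Then start the second instance with squid -f /usr/local/squid/etc/squid2.conf (or wherever you keep that config).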


--Dave
On Jun 12, 2008, at 4:59 PM, Michael St. Laurent wrote:


Is there a way to run a second instance of Squid?  I've specified a
different config file for the other instance but it refused to start
because one instance was already running.

--
Michael St. Laurent
Hartwell Corporation


Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]





Re: [squid-users] Request processing question

2008-04-06 Thread David Lawson


On Apr 6, 2008, at 4:59 AM, Henrik Nordstrom wrote:

lör 2008-04-05 klockan 23:26 -0400 skrev David Lawson:

I've got a couple questions about how Squid chooses to fulfill a
request.  Basically, I've got a cache with a number of sibling peers
defined.  Some of the time it makes an ICP query to those peers and
then does everything it should do, takes the first hit, makes the HTTP
request for the object via that peer, etc.  Some, perhaps most, of the
time, it doesn't even make an ICP query for the object, it just goes
direct to the origin server.


The primary distinction is hierarchical vs. non-hierarchical requests.
Siblings are only queried on hierarchical requests.

non-hierarchical:
 - reload requests
 - cache validations if you have non-Squid ICP peers
 - non-GET/HEAD/TRACE requests
 - authenticated requests
 - matching hierarchy_stoplist


Hmmm, okay, that was more or less the assumption I was working under,  
but the behavior I'm seeing doesn't seem to match that.  One of my  
coworkers did a packet capture of two requests, one of which resulted  
in an ICP query, the other of which bypassed the ICP query process  
entirely and went direct to the origin.


ICP:

   GET http://www.foo.com:8881/towns/baz/x1151547945 HTTP/1.0\r\n
   Request Method: GET
   Request URI: http://www.foo.com:8881/towns/baz/x1151547945
   Request Version: HTTP/1.0
   Host: www.foo.com:8881\r\n
   Accept: text/html,text/plain,application/*\r\n
   From: [EMAIL PROTECTED]
   User-Agent: gsa-crawler (Enterprise; GIX-01642; [EMAIL PROTECTED])\r\n
   Accept-Encoding: gzip\r\n
   If-Modified-Since: Sun, 16 Mar 2008 22:22:39 GMT\r\n
   Via: 1.0 cache2.ghm.zope.net:80 (squid/2.5.STABLE12)\r\n
   X-Forwarded-For: 64.233.190.112\r\n
   Cache-Control: max-age=86400\r\n
   \r\n

Non-ICP:

Hypertext Transfer Protocol
   GET http://www.bar.com:8881/baz/news/rss HTTP/1.0\r\n
   Request Method: GET
   Request URI: http://www.bar.com:8881/baz/news/rss
   Request Version: HTTP/1.0
   Host: www.wickedlocal.com:8881\r\n
   User-Agent: Yahoo-Newscrawler/3.9 (news-search-crawler at yahoo-inc dot com)\r\n
   Via: 1.0 cache4.ghm.zope.net:80 (squid/2.5.STABLE12)\r\n
   X-Forwarded-For: 69.147.86.154\r\n
   Cache-Control: max-age=86400\r\n
   \r\n

Any ideas about why those requests were processed differently?
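
For reference, neither of those URLs looks like it would match the stock hierarchy_stoplist, which as far as I know ships as:

# stock default: URLs containing these strings are treated as
# non-hierarchical and fetched directly rather than via the peers
hierarchy_stoplist cgi-bin ?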


I've also got a broader, more general question about how a request flows
through the Squid process: when ACLs are processed, is that before or
after any rewriting is done to the URLs, and so on.  But that's really
secondary; right now I'm just concerned with the ICP question.


It depends on which access directive you look at. Generally speaking,
http_access is evaluated before URL rewrites, the rest after.



Ah, okay.  Thanks Henrik, I appreciate the info.
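
For anyone reading this later, a minimal sketch of what that ordering implies (the ACL name and the rewrite helper are made up): the dstdomain ACL below is matched against the URL as the client sent it, before the redirector has a chance to rewrite it.

# hypothetical ACL and rewrite helper, purely to show the ordering
acl crawl_sites dstdomain www.foo.com www.bar.com
# http_access sees the URL as the client sent it
http_access allow crawl_sites
http_access deny all
# the redirector only sees the request after http_access has passed it
redirect_program /usr/local/bin/rewrite.pl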

--Dave
Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]





[squid-users] Request processing question

2008-04-05 Thread David Lawson
I've got a couple questions about how Squid chooses to fulfill a
request.  Basically, I've got a cache with a number of sibling peers
defined.  Some of the time it makes an ICP query to those peers and
then does everything it should do, takes the first hit, makes the HTTP
request for the object via that peer, etc.  Some, perhaps most, of the
time, it doesn't even make an ICP query for the object, it just goes
direct to the origin server.  Can anyone tell me why that is and how
to stop it?  I'd like Squid to, at the very least, make the query for
every request.  Can anyone point me in the right direction?  This is
Squid 2.5.STABLE12, by the way.
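
For context, the peer definitions are just the usual sibling ICP setup, something along these lines (hostnames illustrative, not our actual config):

# illustrative sibling peers, not our actual hostnames
cache_peer cache1.example.net sibling 80 3130 proxy-only
cache_peer cache3.example.net sibling 80 3130 proxy-only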


I've also got a broader, more general question about how a request flows
through the Squid process: when ACLs are processed, is that before or
after any rewriting is done to the URLs, and so on.  But that's really
secondary; right now I'm just concerned with the ICP question.


--Dave
Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]





Re: [squid-users] force caching (or High availability config)

2007-12-13 Thread David Lawson


On Dec 13, 2007, at 3:57 PM, Dmitry S. Makovey wrote:



Hi,

I have squid configured as a transparent proxy in front of an application
server (ApS).  Data generated by ApS gets updated infrequently, and
sometimes ApS gets slow doing its internal housecleaning.  What I want is
for Squid to fudge response times a bit by timing out connections to ApS
after, say, 20s and using cached data instead (even if it's outdated).
This would also help with ApS reboots, so that data is available at all
times regardless of the responsiveness or availability of ApS.

Looking through documentation and Google searches didn't bring up any
relevant information.

I do realize that this violates HTTP and is not widely applicable, but in
my situation I can live with the consequences (I think).


This is actually a feature we've been interested in as well.  As far as I
know, there's no way to do this in Squid right now.  It was discussed
before by one of my co-workers, and apparently a similar feature was being
developed; I don't know whether that ever made it into the mainline code,
but I'm sure one of the developers can comment.


What we've done instead is leverage offline mode: if the application
servers get themselves into a state where they won't reply in a timely
manner, the caches are automatically toggled into offline mode by a
watchdog daemon.  Depending on your configuration and your ability to
monitor your application server's state, that might be an option you can
consider in lieu of doing it entirely in Squid.
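
Concretely, the toggle is just the offline_mode directive; this is a sketch rather than our exact setup:

# serve hits from cache without revalidating against the origin
offline_mode on

The watchdog simply flips that line between on and off and then runs squid -k reconfigure.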


--Dave
Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]





Re: [squid-users] Squid Running out of Disk space

2007-09-26 Thread David Lawson
Depending on your file system options, Linux generally reserves 5% of
space for the superuser.  What you need to do is find out how much space
is being used by everything other than Squid, how much space you have
available minus that reserved five percent, and size your disk cache
appropriately.  I'd consider just knocking ten gig off that number and
going from there.
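
As a rough sketch, on a 72 GB disk with ~5% reserved (about 3.6 GB) plus headroom for logs and the OS, something in this neighborhood is a safer setting (the size field is in megabytes):

# roughly 58 GB of cache on a 72 GB disk; leave more headroom if logs
# or anything else shares the filesystem
cache_dir aufs /usr/local/squid/var/cache 58000 16 256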


--Dave
On Sep 26, 2007, at 11:47 AM, Abdock wrote:

Space is the issue: I did df and it was nearing 95%, and Squid stops with
a disk-full error.


-Original Message-
From: Martin A. Brooks [mailto:[EMAIL PROTECTED]
Sent: 26 September 2007 17:47
To: Abdock
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Running out of Disk space

Abdock wrote:

Hello All,

I have a single 72 GB HDD and have configured Squid with the parameters
below, but it just runs out of disk space.

Squid 2.6.STABLE15

cache_dir aufs /usr/local/squid/var/cache 5 16 256

After a week the box runs out of disk space.

Can anybody help with this one?


df -i

You're probably out of inodes.

--

 Martin A. Brooks  |  http://www.antibodymx.net/  | Anti-spam & anti-virus
 Consultant        |  [EMAIL PROTECTED]            | filtering. Inoculate
 antibodymx.net    |  m: +447896578023            | your mail system.




Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]





Re: [squid-users] LVS Reverse Proxy Squid

2007-09-18 Thread David Lawson
I use a similar setup.  What you want to do is have a separate squid.conf
file for each instance, with each instance listening on a different
http_port and icp_port, then point your real servers at the appropriate
instances.  It's worked out very well for me.
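
A sketch of what differs between the per-instance config files (ports and paths are illustrative):

# instance A (e.g. squidA.conf)
http_port 192.168.60.7:80
icp_port 3130
pid_filename /var/run/squidA.pid

# instance B (e.g. squidB.conf)
http_port 192.168.60.7:81
icp_port 3131
pid_filename /var/run/squidB.pid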


--Dave
On Sep 18, 2007, at 2:42 PM, Brad Taylor wrote:

We use LVS (load balancer) to send traffic to multiple Squid 2.5 servers
in reverse proxy mode.  We want to put multiple Squid instances on one
box and have successfully done that by changing http_port 80 to
http_port 192.168.60.7:80 in the squid.conf file.  We tested to that
instance of squid and it worked successfully.  Once it is added to the
LVS load balancer the site no longer works.  I'll check with the LVS
group also.





Re: [squid-users] LVS Reverse Proxy Squid

2007-09-18 Thread David Lawson


On Sep 19, 2007, at 12:00 AM, Ding Deng wrote:


Brad Taylor [EMAIL PROTECTED] writes:


We use LVS (load balancer) to send traffic to multiple Squid 2.5
servers in reverse proxy mode. We want to put multiple Squid instances
on one box and have successfully done that by changing http_port 80 to
http_port 192.168.60.7:80 in the squid.conf file. We tested to that


Squid is listening only on a private address now; what will the source
address of the response from Squid be?


LVS NATs outbound responses; as long as the response to a client request
goes from the cache through the load balancer, it'll be NATed fine.


instance of squid and worked successfully. Once it is added to the LVS
load balancer the site no longer works. I'll check with the LVS group
also.


You need as many public addresses as the number of Squid instances you'd
like to run on a single box, and configure each instance to listen on a
different public address, e.g.:


This is untrue in an LVS environment, though true if the Squids are bare
on the network.  In the case where you're load balancing with LVS, the
simplest way to achieve this is to have each Squid instance simply listen
on a unique port: instance A on port 80, instance B on port 81, etc.  Then
set up the LVS VIPs and RIPs to direct traffic appropriately.


VIP A: 1.1.1.1:80
RIP A: 2.2.2.2:80
RIP A: 2.2.2.3:80

VIP B: 1.1.1.2:80
RIP B: 2.2.2.2:81
RIP B: 2.2.2.3:81

Etc.  This assumes you're using LVS NAT routing; for DR and TUN there are
some details that are slightly different, but the basic concept is the
same.  I'll be more than happy to answer Brad's specific questions about
the LVS/Squid relationship in more depth off list if he wants, since this
is really less a Squid question and more a "How do I make LVS and Squid
play well together?" question.


--Dave
Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]





Re: [squid-users] How to override expires, maxage, s-maxage on reverse proxy?

2007-07-29 Thread David Lawson


On Jul 29, 2007, at 1:47 PM, Michael Pye wrote:


Ricardo Newbery wrote:
latest version.  But I'm not sure I should do this via s-maxage in the
response, as this setting might also apply to other proxies upstream of
me.


If you want other caches to take note of the cache-control max-age
headers, but you want your cache to cache for longer, then set a minimum
expiry time in a refresh_pattern for your site.  I believe the minimum
expiry time will override the cache-control header.


IIRC, refresh patterns only apply to objects that don't return enough
information in their headers to determine freshness or staleness, i.e.
they don't have a Last-Modified and a max-age or Expires header.  So an
object with those headers set wouldn't be affected by a refresh pattern.
You can use the override-expire option in the refresh pattern to change
that behavior.  I may be wrong; please correct me if I am.
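
Something along these lines is what I have in mind, if you want your own cache to hold objects longer regardless of the origin's headers (the pattern and times are purely illustrative, and in 2.6 the option is spelled override-expire):

# min 1 day, max 1 week (values are in minutes); override-expire lets the
# minimum win over the origin's Expires header on this cache only
refresh_pattern -i ^http://www\.example\.com/ 1440 90 10080 override-expire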


--Dave Lawson
Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]