I want to do referer logging, but only for specific domains, not all
of my traffic. Is this possible using ACLs? I'm using Squid 2.5
STABLE 13.
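As far as I know, Squid 2.5's separate referer_log directive (a compile-time option) logs all traffic and takes no ACLs; per-ACL log filtering arrived with the access_log directive in Squid 2.6. A sketch for 2.6+, where the domain list and log path are examples only:

```
# Squid 2.6+ only; 2.5's referer_log cannot be restricted by ACL.
acl log_domains dstdomain .example.com .example.org
logformat referers %ts.%03tu %>a %{Referer}>h %ru
access_log /var/log/squid/referer.log referers log_domains
```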
Hi, if I change the size of a cache_dir in squid.conf, do I have to
re-initialize the directory with squid -z?
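For reference, the size in question is the first numeric argument of the cache_dir line; a hypothetical example with an assumed path:

```
# ufs store at an assumed path: 10000 MB total,
# 16 first-level and 256 second-level subdirectories.
cache_dir ufs /var/spool/squid 10000 16 256
```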
What could cause a line in access.log like the following?
[13/Jul/2006:20:55:04 +] "GET
http://p.foo.net/ph/31/1/38/mlasw87/1151964243_t.jpg HTTP/1.0" 504 510
TCP_REFRESH_MISS:NONE
The object in question should be cacheable for a long time (Expires is set
to 5 years), and it's caching fine,
squid-users, I hope you can save me once again :) I've been getting a
lot of the errors below. Does this look like something I can fix with
reconfiguration or recompilation?
2006/07/04 20:59:42| WARNING: Disk space over limit: 440086904 KB > 432410624 KB
2006/07/04 20:59:53| WARNING: Disk space o
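The warning compares the current store size against the configured limit. One knob worth checking is the replacement watermarks, which control how early Squid starts evicting objects before the cache_dir limit is reached; a sketch with example values (the Squid 2.5 defaults are 90 and 95):

```
# Start replacement earlier to keep headroom below the cache_dir limit.
# These values are examples, not taken from this thread.
cache_swap_low  85
cache_swap_high 90
```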
Squid seems to have a bug with Expires and Date headers:
It fetches an object and caches the headers.
The object expires, and Squid fetches it again.
The object is unmodified, so Squid continues to use the cached object.
However, it appears that it also continues to return the old Expires
and Date headers.
Whoops, I see that the FAQ has covered this question...
"With Squid-2, you will not lose your existing cache. You can add and
delete cache_dirs without affecting any of the others."
So to be more specific, is this still the case with 2.5STABLE13?
On 6/19/06, lawrence wang <[EMAIL PROTECTED]> wrote:
If I have multiple cache_dirs on separate drives, and one drive fails,
so I edit squid.conf to remove that cache_dir, how are the others
affected? Will I be able to continue using the cached objects in the
other cache_dirs, or do I have to rebuild the cache from scratch?
Hi, is it possible that Squid performs worse when handling large
volumes of If-Modified-Since requests instead of normal GETs? I have
two Squid servers running at around 1000 requests per second; the only
difference in the traffic pattern I can discern is that 50% of
requests on one server are IMS requests.
Hello,
Are there any ill (or good) effects of running squid -z on cache
directories which have already been initialized? I'm writing a deploy
script and it's more convenient for me to always run "squid -z", but I
want to make sure this won't clear my cache or anything like that.
Thanks!
Lawrence
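One way to keep a deploy script idempotent without relying on "squid -z" being harmless is to guard it. A sketch, where the cache directory path and the swap.state check are assumptions about a default ufs setup:

```shell
# Hypothetical deploy-script guard: only run "squid -z" when the cache
# directory has no swap.state yet, so an already-initialized cache is
# never touched. CACHE_DIR is an assumed path.
needs_init() {
    # succeed (0) only if no swap.state exists in the given directory
    [ ! -f "$1/swap.state" ]
}

CACHE_DIR=/var/spool/squid
if needs_init "$CACHE_DIR" && command -v squid >/dev/null 2>&1; then
    squid -z
fi
```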
I've noticed that Squid is having trouble caching an object which has
a Set-Cookie header. Here are the object's headers:
Date: Thu, 25 May 2006 15:34:09 GMT
Server: Apache/2.0.54 (Debian GNU/Linux) mod_ssl/2.0.54 OpenSSL/0.9.7e
mod_apreq2-20051231/2.5.7 mod_perl/2.0.2 Perl/v5.8.4
Set-Cookie: NNN
I see. So I can work around this for now by splitting my cache_dir
into multiple ones? It's currently set at 300 GB.
On 5/19/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
Fri 2006-05-19 at 11:03 -0400, lawrence wang wrote:
> today i got a squid crash with this entry in the lo
2.5STABLE13, with epoll patch.
On 5/19/06, Mark Elsen <[EMAIL PROTECTED]> wrote:
> today i got a squid crash with this entry in the log:
>
> 2006/05/19 11:38:16| assertion failed: filemap.c:78: "fm->max_n_files
> <= (1 << 24)"
>
> does this mean that squid has a limit on the maximum number of fi
Today I got a squid crash with this entry in the log:
2006/05/19 11:38:16| assertion failed: filemap.c:78: "fm->max_n_files
<= (1 << 24)"
Does this mean that Squid has a limit on the maximum number of files
that can be in the cache?
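The assertion does put a ceiling of 1 << 24 (16,777,216) entries on a single cache_dir's file map. A quick back-of-the-envelope check, using the 300 GB cache size and 4 KB average object size mentioned in these threads:

```shell
# Sketch: can a single cache_dir hit the 1<<24 file-map limit from the
# filemap.c assertion? 300 GB and 4 KB average are figures from the list.
max_files=$(( 1 << 24 ))                      # 16777216 entries per dir
cache_bytes=$(( 300 * 1000 * 1000 * 1000 ))   # 300 GB
avg_object=$(( 4 * 1024 ))                    # 4 KB average object
objects=$(( cache_bytes / avg_object ))
dirs_needed=$(( (objects + max_files - 1) / max_files ))  # ceiling div
echo "$objects objects -> split across at least $dirs_needed cache_dirs"
```

So a 300 GB store of 4 KB objects holds roughly 73 million entries, well past the limit, which fits the suggestion of splitting it into several smaller cache_dirs.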
That reminds me -- I'm using --enable-dlmalloc, Doug Lea's malloc
library. Could that be the issue?
On 4/21/06, Mark Elsen <[EMAIL PROTECTED]> wrote:
> > I've seen the answer to this in the FAQ. However,
> >
> > 1) I am definitely not running out of swap, and
> > 2) "ulimit -HSd" reports that the max
I've seen the answer to this in the FAQ. However,
1) I am definitely not running out of swap, and
2) "ulimit -HSd" reports that the max segment size is set to unlimited
by default.
I am seeing this behavior consistently on a number of boxes when they
get to a little over 1GB of resident memory use.
I've got Squid-2.5.STABLE13 with the epoll patch, and I'm getting a
lot of "clearing ENTRY_DEFER_READ" messages in my cache.log. Is this
something I should be concerned about, or just a debug message at the
wrong verbosity level?
2.5.STABLE12, no patches.
On 3/29/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> Mon 2006-03-27 at 13:03 -0500, lawrence wang wrote:
> > i get the following in squid_cache.log when i rotate:
> >
> > 2006/03/27 18:01:30| storeDirWriteCleanLogs: Starting
Hi,
I've been having an issue with high CPU load (70-80%) with traffic
levels of about 150-200 requests per second, average object size 4 KB,
99% cache hits.
Today I tried setting "client_persistent_connections off" in
squid.conf, and the average number of open connections dropped, of
course, down
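For completeness, the directive from the experiment above, alongside its server-side counterpart (both exist in Squid 2.5; only the "off" value is from this thread):

```
client_persistent_connections off
server_persistent_connections on
```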
I get the following in squid_cache.log when I rotate:
2006/03/27 18:01:30| storeDirWriteCleanLogs: Starting...
2006/03/27 18:01:31| Finished. Wrote 5793 entries.
2006/03/27 18:01:31| Took 0.0 seconds (2736419.5 entries/sec).
2006/03/27 18:01:31| logfileRotate: /var/log/cdn/http/squid_store.lo
2006-03-18 at 08:41 -0500, lawrence wang wrote:
> > I see. But maybe I've phrased this wrong... It seems like when the
> > purge tool runs, it does find all the different variants for a given
> > URL and runs requests against each of them; of course the variants
> > which
Hi, I was testing out squid -k rotate on squid-2.5STABLE12, and I
notice that cache.log and store.log rotate OK (*.0 files are created),
but access.log doesn't; furthermore, if I restart the server,
access.log is emptied, so I lose my old logs.
If I rename the file after running rotate, it will ke
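A possible external-rotation workaround, assuming Squid keeps writing to a renamed access.log until told to reopen it (the paths and date suffix are illustrative):

```shell
# Rename the current log, then ask Squid to reopen its log files.
# With "logfile_rotate 0" in squid.conf, -k rotate closes and reopens
# the logs without doing its own numbered renaming.
LOG=/var/log/squid/access.log
mv "$LOG" "$LOG.$(date +%Y%m%d)"
command -v squid >/dev/null 2>&1 && squid -k rotate
```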
It seems like Squid can't help but be CPU-limited when it's serving
very small objects (<1 KB) from memory, which is my situation.
On 3/21/06, Chris Robertson <[EMAIL PROTECTED]> wrote:
> lawrence wang wrote:
>
> >Is there a way to have Squid 2.5STABLE12 take advanta
Is there a way to have Squid 2.5STABLE12 take advantage of multiple
CPUs? Thanks in advance for any advice or suggestions.
t. Perhaps there's a way to relax this check without
breaking anything else?
On 3/18/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> Fri 2006-03-17 at 17:11 -0500, lawrence wang wrote:
> > I was wondering,
> > since this is a significant hassle, if anyone's writt
akes Squid purge all variants under a given URL, something that would
then be usable with the existing third-party purge tool. And if not,
can anyone point me in the general direction of the code I might want
to start digging into to roll my own patch? Thanks in advance.
--Lawrence Wang