[squid-users] can referer logging be filtered by ACL?

2006-09-26 Thread Lawrence Wang

I want to do referer logging, but only for specific domains, not all
of my traffic. Is this possible using ACLs? I'm using Squid
2.5.STABLE13.
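
The kind of thing I'm after, sketched with made-up names against the
logformat/access_log syntax of Squid 2.6 (2.5's referer_log directive
doesn't appear to take ACLs):

# log the Referer header, but only for requests to the chosen domains
acl tracked_sites dstdomain .example.com
logformat referers %ts.%03tu %>a %ru %{Referer}>h
access_log /var/log/squid/referer.log referers tracked_sites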


[squid-users] changing cache_dir size

2006-09-19 Thread Lawrence Wang

Hi, if I change the size of a cache_dir in squid.conf, do I have to
re-initialize the dir with squid -z?
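
For reference, the line I'd be changing, with placeholder path and
numbers (the size field is in MB):

# only the size field (here 51200 MB = 50 GB) would change
cache_dir aufs /var/spool/squid 51200 16 256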


[squid-users] 504 and TCP_REFRESH_MISS:NONE

2006-07-13 Thread lawrence wang

What could cause a line in access.log like the following?

[13/Jul/2006:20:55:04 +] "GET
http://p.foo.net/ph/31/1/38/mlasw87/1151964243_t.jpg HTTP/1.0" 504 510
TCP_REFRESH_MISS:NONE

The object in question should be cacheable for a long time (Expires is set
five years out), it has been caching fine, and I have ignore_reload turned
on so that clients can't force a refresh. Yet there's that 504 Gateway
Timeout, even though I can curl the URL with no problem.

Squid is configured as part of a cache hierarchy -- might that cause the
problem? It has three siblings and three children (who are also the children
of its siblings).

Thanks in advance!
Lawrence


[squid-users] disk space over limit

2006-07-05 Thread lawrence wang

squid-users, I hope you can save me once again :) I've been getting a
lot of the warnings below. Does this look like something I can fix
with reconfiguration or recompilation?

2006/07/04 20:59:42| WARNING: Disk space over limit: 440086904 KB > 432410624 KB
2006/07/04 20:59:53| WARNING: Disk space over limit: 439706788 KB > 432410624 KB
2006/07/04 21:00:04| WARNING: Disk space over limit: 439553980 KB > 432410624 KB
2006/07/04 21:00:15| WARNING: Disk space over limit: 439485096 KB > 432410624 KB
2006/07/04 21:00:26| WARNING: Disk space over limit: 439387548 KB > 432410624 KB
2006/07/04 21:00:37| WARNING: Disk space over limit: 439290212 KB > 432410624 KB
2006/07/04 21:00:48| WARNING: Disk space over limit: 439145952 KB > 432410624 KB
2006/07/04 21:00:59| WARNING: Disk space over limit: 438938676 KB > 432410624 KB
2006/07/04 21:01:10| WARNING: Disk space over limit: 438749888 KB > 432410624 KB
2006/07/04 21:01:21| WARNING: Disk space over limit: 438524564 KB > 432410624 KB
2006/07/04 21:01:32| WARNING: Disk space over limit: 438372576 KB > 432410624 KB
2006/07/04 21:01:43| WARNING: Disk space over limit: 438090032 KB > 432410624 KB
2006/07/04 21:01:54| WARNING: Disk space over limit: 437866428 KB > 432410624 KB
2006/07/04 21:02:05| WARNING: Disk space over limit: 437655232 KB > 432410624 KB
2006/07/04 21:02:16| WARNING: Disk space over limit: 437551136 KB > 432410624 KB
2006/07/04 21:02:27| WARNING: Disk space over limit: 437490324 KB > 432410624 KB
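
If this is just the replacement watermarks falling behind, these are
the knobs I'd look at first (a sketch; the values are percentages of
the cache_dir size, and 90/95 are the defaults):

# start evicting earlier so usage stays under the configured limit
cache_swap_low  85
cache_swap_high 90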


[squid-users] Date and Expires headers not updating?

2006-06-21 Thread lawrence wang

Squid seems to have a bug with Expires and Date headers:

1. It fetches an object and caches the headers.
2. The object expires, and Squid fetches it again.
3. The object is unmodified, so Squid continues to use the cached object.
4. However, it also continues to return the old Expires and Date
   headers, even though it seems to be using new values under the hood.

This will confuse downstream caches, won't it?
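
A quick way to watch what comes back on each request (hypothetical URL
and proxy address):

curl -s -x localhost:3128 -D - -o /dev/null http://example.com/obj \
  | grep -E -i '^(Date|Expires|Age):'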


[squid-users] Re: What happens when a cache_dir fails?

2006-06-19 Thread lawrence wang

Whoops, I see that the FAQ has covered this question:

"With Squid-2, you will not lose your existing cache. You can add and
delete cache_dirs without affecting any of the others."

So to be more specific, is this still the case with 2.5.STABLE13?

On 6/19/06, lawrence wang <[EMAIL PROTECTED]> wrote:

If I have multiple cache_dirs on separate drives, and one drive fails,
so I edit squid.conf to remove that cache_dir, how are the others
affected? Will I be able to continue using the cached objects in the
other cache_dirs, or do I have to rebuild the cache from scratch?



[squid-users] What happens when a cache_dir fails?

2006-06-19 Thread lawrence wang

If I have multiple cache_dirs on separate drives, and one drive fails,
so I edit squid.conf to remove that cache_dir, how are the others
affected? Will I be able to continue using the cached objects in the
other cache_dirs, or do I have to rebuild the cache from scratch?
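
The layout in question, roughly (placeholder paths; sizes in MB, one
cache_dir per physical drive):

cache_dir aufs /drive1/squid 150000 16 256
cache_dir aufs /drive2/squid 150000 16 256
cache_dir aufs /drive3/squid 150000 16 256

If /drive2 died, I would just delete its line and restart.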


[squid-users] Performance worse when handling IMS requests?

2006-06-15 Thread lawrence wang

Hi, is it possible that Squid performs worse when handling large
volumes of If-Modified-Since requests instead of normal GETs? I have
two Squid servers running at around 1000 requests per second; the only
difference in the traffic pattern I can discern is that 50% of the
requests on one server are IMS requests, which Squid answers with
304 Not Modified. That server fails a heartbeat test sporadically, but
the other one is fine.
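
For reference, the two request shapes can be reproduced like this
(hypothetical URL and proxy; the If-Modified-Since date is made up):

# plain GET
curl -s -x proxy:3128 http://example.com/obj -o /dev/null
# conditional GET; Squid answers 304 Not Modified if the copy is unchanged
curl -s -x proxy:3128 -o /dev/null \
  -H 'If-Modified-Since: Thu, 01 Jun 2006 00:00:00 GMT' http://example.com/obj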

I would think that 304 responses are easier to serve, since Squid just
returns headers, but I can't find any other difference right now.
Thanks in advance for your help!

Lawrence


[squid-users] repeating squid -z

2006-05-25 Thread lawrence wang

Hello,

Are there any ill (or good) effects of running squid -z on cache
directories that have already been initialized? I'm writing a deploy
script and it's more convenient to always run "squid -z", but I
want to make sure this won't clear my cache or anything like that.
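
The step in question is essentially just this (placeholder config path):

# create swap directories; the question is what this does to existing ones
squid -f /etc/squid/squid.conf -z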
Thanks!

Lawrence


[squid-users] HIT on HEAD but MISS on GET?

2006-05-25 Thread lawrence wang

I've noticed that Squid is having trouble caching an object which has
a Set-Cookie header. Here are the object's headers:

Date: Thu, 25 May 2006 15:34:09 GMT
Server: Apache/2.0.54 (Debian GNU/Linux) mod_ssl/2.0.54 OpenSSL/0.9.7e
mod_apreq2-20051231/2.5.7 mod_perl/2.0.2 Perl/v5.8.4
Set-Cookie: NNNSSID=f441217251ca7cc1aa62506590894578; expires=Thu, 25
May 2006 16:34:09 GMT; path=/
Last-Modified: Wed, 24 May 2006 20:49:12 GMT
ETag: "9bd16b-7c80cf-de294200"
Accept-Ranges: bytes
Content-Length: 8159439
Content-Type: text/plain; charset=UTF-8

When I send HEAD requests (using curl), the Squid cache behaves as I
would expect. The first request is a MISS and the headers include the
Set-Cookie line; subsequent requests are HITs and the headers do not
include Set-Cookie.

However, when I do the same thing with GET, the requests always return
MISS and include the Set-Cookie header. Sending a GET request also
makes the next HEAD request return a MISS.
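
The test, roughly (placeholder proxy and URL; Squid's X-Cache response
header shows the HIT or MISS):

curl -s -x localhost:3128 -I http://origin.example.com/file                 # HEAD
curl -s -x localhost:3128 -D - -o /dev/null http://origin.example.com/file  # GET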

Hopefully this is just a misconfiguration on my end; I swear this used
to work correctly... Any help is appreciated. Thanks in advance.

Lawrence


Re: [squid-users] max number of files in cache?

2006-05-21 Thread lawrence wang

I see. So I can work around this for now by splitting my cache_dir
into multiple ones? It's currently set at 300 GB.
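
Something like this, I suppose, since the limit is per cache_dir
(placeholder paths; sizes in MB, i.e. 3 x 100 GB):

cache_dir aufs /cache1 102400 16 256
cache_dir aufs /cache2 102400 16 256
cache_dir aufs /cache3 102400 16 256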

On 5/19/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

Fri 2006-05-19 at 11:03 -0400, lawrence wang wrote:
> Today I got a Squid crash with this entry in the log:
>
> 2006/05/19 11:38:16| assertion failed: filemap.c:78: "fm->max_n_files
> <= (1 << 24)"
>
> Does this mean that Squid has a limit on the maximum number of files
> that can be in the cache?

Yes, there is a limit of 2^24 objects per cache_dir.

How large a cache_dir do you have? 2^24 files normally translates to
somewhere around 160 GB.


Please file a bug report on this issue. Squid should not crash just
because there are very many objects in the cache.

  http://www.squid-cache.org/bugs/

Regards
Henrik









Re: [squid-users] max number of files in cache?

2006-05-19 Thread lawrence wang

2.5.STABLE13, with the epoll patch.

On 5/19/06, Mark Elsen <[EMAIL PROTECTED]> wrote:

> Today I got a Squid crash with this entry in the log:
>
> 2006/05/19 11:38:16| assertion failed: filemap.c:78: "fm->max_n_files
> <= (1 << 24)"
>
> Does this mean that Squid has a limit on the maximum number of files
> that can be in the cache?
>

 - Squid version?

 M.



[squid-users] max number of files in cache?

2006-05-19 Thread lawrence wang

Today I got a Squid crash with this entry in the log:

2006/05/19 11:38:16| assertion failed: filemap.c:78: "fm->max_n_files
<= (1 << 24)"

Does this mean that Squid has a limit on the maximum number of files
that can be in the cache?


Re: [squid-users] Squid crashing with "FATAL: xcalloc..." in cache.log

2006-04-21 Thread lawrence wang
That reminds me: I'm using --enable-dlmalloc, Doug Lea's malloc
library. Could that be the issue?

On 4/21/06, Mark Elsen <[EMAIL PROTECTED]> wrote:
> > I've seen the answer to this in the FAQ. However,
> >
> > 1) I am definitely not running out of swap, and
> > 2) "ulimit -HSd" reports that the max segment size is set to unlimited
> > by default.
> >
> > I am seeing this behavior consistently on a number of boxes when they
> > get to a little over 1GB of resident memory usage. The amount of
> > physical RAM on the boxes ranges from 4-8GB.
> >
> > The boxes are all running Fedora Core 4. I haven't been able to find
> > much documentation on how to find what the max segment size is for a
> > given running process, i.e. whether "unlimited" really means unlimited
> > or whether there might be a hard cap imposed elsewhere. Any pointers?
> > Or are there any known issues with Squid using >1GB of memory?
> >
>
>  - Check out:
>
>  http://www.squid-cache.org/mail-archive/squid-users/200310/0297.html
>
>  It contains an example C program to check how much memory you can
>  allocate on your system.
>
> Squid configure also has this option:
>
>--enable-xmalloc-statistics
>   Show malloc statistics in status page
>
>  This may give additional info.
>
>  M.
>


[squid-users] Squid crashing with "FATAL: xcalloc..." in cache.log

2006-04-20 Thread lawrence wang
I've seen the answer to this in the FAQ. However,

1) I am definitely not running out of swap, and
2) "ulimit -HSd" reports that the max segment size is set to unlimited
by default.

I am seeing this behavior consistently on a number of boxes when they
get to a little over 1GB of resident memory usage. The amount of
physical RAM on the boxes ranges from 4-8GB.

The boxes are all running Fedora Core 4. I haven't been able to find
much documentation on how to find what the max segment size is for a
given running process, i.e. whether "unlimited" really means unlimited
or whether there might be a hard cap imposed elsewhere. Any pointers?
Or are there any known issues with Squid using >1GB of memory?


[squid-users] epoll and ENTRY_DEFER_READ messages

2006-04-10 Thread lawrence wang
I've got Squid-2.5.STABLE13 with the epoll patch, and I'm getting a
lot of "clearing ENTRY_DEFER_READ" messages in my cache.log. Is this
something I should be concerned about, or just a debug message at the
wrong verbosity level?
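
In case it's just a verbosity knob, this is the only related setting I
know of (ALL,1 is the default in squid.conf):

# cap cache.log at the default debug level
debug_options ALL,1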


Re: [squid-users] rotate bug?

2006-03-30 Thread lawrence wang
2.5.STABLE12, no patches.

On 3/29/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> Mon 2006-03-27 at 13:03 -0500, lawrence wang wrote:
> > I get the following in squid_cache.log when I rotate:
> >
> > 2006/03/27 18:01:30| storeDirWriteCleanLogs: Starting...
> > 2006/03/27 18:01:31|   Finished.  Wrote 5793 entries.
> > 2006/03/27 18:01:31|   Took 0.0 seconds (2736419.5 entries/sec).
> > 2006/03/27 18:01:31| logfileRotate: /var/log/cdn/http/squid_store.log
> > 2006/03/27 18:01:31| helperOpenServers: Starting 50 'squirm' processes
>
> There should be a mention of access.log between the store log and
> helperOpenServers.
>
> > Even when I set logfile_rotate 0, it still doesn't handle access.log
> > correctly, and keeps writing to the old file after I rename it, until
> > I restart Squid. I'm using aufs; does that make a difference?
>
> aufs does not make a difference in this aspect.
>
> Which Squid version?
>
> Any patches applied except for official patches from the "known bugs"
> page?
>
>
> Regards
> Henrik


[squid-users] client_persistent_connections kills performance?

2006-03-29 Thread lawrence wang
Hi,
I've been having an issue with high CPU load (70-80%) at traffic
levels of about 150-200 requests per second, average object size 4 KB,
99% cache hits.

Today I tried setting "client_persistent_connections off" in
squid.conf, and the average number of open connections dropped, of
course, down from over 2000 to the same as the number of requests per
second.

CPU load also dropped drastically, to under 10%. This is certainly the
performance improvement I was looking for; however, I'm trying to
understand why this is so drastic. I'd like to leave keepalive on for
when clients use it more (currently, most of the traffic is a single
request per client, but this will change in the future). Does this
have to do with the time spent in select()? Thanks in advance.
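
For reference, what I changed, next to the server-side directive I
left alone (both default to on):

# client-side keepalive off; persistent connections to origins unaffected
client_persistent_connections off
server_persistent_connections on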


Re: [squid-users] rotate bug?

2006-03-27 Thread lawrence wang
I get the following in squid_cache.log when I rotate:

2006/03/27 18:01:30| storeDirWriteCleanLogs: Starting...
2006/03/27 18:01:31|   Finished.  Wrote 5793 entries.
2006/03/27 18:01:31|   Took 0.0 seconds (2736419.5 entries/sec).
2006/03/27 18:01:31| logfileRotate: /var/log/cdn/http/squid_store.log
2006/03/27 18:01:31| helperOpenServers: Starting 50 'squirm' processes

Even when I set logfile_rotate 0, it still doesn't handle access.log
correctly, and keeps writing to the old file after I rename it, until
I restart Squid. I'm using aufs; does that make a difference?
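
What I expected to be able to do, given the documented logfile_rotate 0
behavior (rotate should then just close and reopen the files):

mv /var/log/cdn/http/squid_access.log /var/log/cdn/http/squid_access.log.0
squid -k rotate   # squid should reopen its logs under the original names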

On 3/25/06, Mark Elsen <[EMAIL PROTECTED]> wrote:
> > Hi, I was testing out squid -k rotate on squid-2.5.STABLE12, and I
> > notice that cache.log and store.log rotate OK (*.0 files are created),
> > but access.log doesn't; furthermore, if I restart the server,
> > access.log is emptied, so I lose my old logs.
> >
> > If I rename the file after running rotate, it will keep writing to
> > that one and then write to a new file after restart.
> >
> > My access.log is named "squid_access.log" and it's in a non-standard
> > location; maybe that's why?
> >
>
> It can perfectly well be in a non-standard location, if it is
> configured that way and/or you are using log-path directives in
> squid.conf.
>
> Is there anything in cache.log when you try to rotate?
> Look both at the end of the pre-rotation cache.log (tail it) and at
> the beginning of the new one (head it).
>
> M.
>


Re: [squid-users] purging variants

2006-03-27 Thread lawrence wang
thanks for the clarification. looks like i have some hacking to do.
so, are these squid object headers documented, like at what offset i
should look to see if an object is a variant, or should i just dive
into the code?

On 3/18/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> Sat 2006-03-18 at 08:41 -0500, lawrence wang wrote:
> > I see. But maybe I've phrased this wrong... It seems like when the
> > purge tool runs, it does find all the different variants for a given
> > URL and runs requests against each of them; of course the variants
> > which require specific headers return 404's when those are not found
> > in the request. Perhaps there's a way to relax this check without
> > breaking anything else?
>
> The PURGE tool must send the correct headers, or Squid won't know what
> to do. It's not a check, it's how things work. Squid-2.5 does not know
> what variants there are for a given URL, any more than it knows what
> URLs there are in the cache. All it knows to do is "OK, I now have
> these request headers and URL given to me by the client; is there a
> matching object?". Without the headers it cannot find or know the
> variant. Without the headers, all Squid-2.5 finds is that the object
> varies, but it has no means of finding the variants. Note of warning:
> if you PURGE without the variant headers, then Squid-2.5 forgets that
> the object varies, and the remaining cached variants of the object
> cannot be reached until Squid has again learned that the object varies
> by seeing a Vary header response from the server. This means that if
> you purge without the headers, there is no longer any way to purge the
> variants without first making a request (which will be a cache miss)
> for the URL.
>
> The PURGE tool could be modified to do this, I suppose. It only needs
> to be taught the Vary algorithm used by 2.5 and to decode this into
> suitable request headers as part of the purge request sent to Squid.
> The information required to reconstruct those headers is found in a
> meta TLV header of the object and is already read by the purge tool;
> it just does not know the meaning of this information and consequently
> does not make use of it today.
>
>
> Regards
> Henrik


[squid-users] rotate bug?

2006-03-25 Thread lawrence wang
Hi, I was testing out squid -k rotate on squid-2.5.STABLE12, and I
notice that cache.log and store.log rotate OK (*.0 files are created),
but access.log doesn't; furthermore, if I restart the server,
access.log is emptied, so I lose my old logs.

If I rename the file after running rotate, it will keep writing to
that one and then write to a new file after restart.

My access.log is named "squid_access.log" and it's in a non-standard
location; maybe that's why?

--lawrence


Re: [squid-users] how to take advantage of multiple CPUs?

2006-03-22 Thread lawrence wang
it seems like Squid can't help but be CPU limited when it's serving
very small objects (<1KB) from memory, which is my situation.

On 3/21/06, Chris Robertson <[EMAIL PROTECTED]> wrote:
> lawrence wang wrote:
>
> >Is there a way to have Squid 2.5.STABLE12 take advantage of multiple
> >CPUs? Thanks in advance for any advice or suggestions.
> >
> >
> Run multiple instances...
>
> http://squidwiki.kinkie.it/MultipleInstances
>
> Determining why your Squid is CPU limited (assuming that is the reason
> for this question) is quite often a better course of action.
>
> Chris
>
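
Concretely, I take the suggestion to mean something like this (made-up
config names; each config needs its own http_port, cache_dir,
pid_filename, and log paths):

squid -f /etc/squid/squid-a.conf   # e.g. http_port 3128
squid -f /etc/squid/squid-b.conf   # e.g. http_port 3129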


[squid-users] how to take advantage of multiple CPUs?

2006-03-21 Thread lawrence wang
Is there a way to have Squid 2.5.STABLE12 take advantage of multiple
CPUs? Thanks in advance for any advice or suggestions.


Re: [squid-users] purging variants

2006-03-18 Thread lawrence wang
I see. But maybe I've phrased this wrong... It seems like when the
purge tool runs, it does find all the different variants for a given
URL and runs requests against each of them; of course the variants
that require specific headers return 404s when those are not found
in the request. Perhaps there's a way to relax this check without
breaking anything else?

On 3/18/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> Fri 2006-03-17 at 17:11 -0500, lawrence wang wrote:
> > I was wondering,
> > since this is a significant hassle, if anyone's written a patch that
> > makes Squid purge all variants under a given URL,
>
> The problem is that Squid-2.5 does not know the URLs of objects. If it
> knew, it would do it.
>
> The PURGE tool could be modified to do this, I suppose. It only needs
> to be taught the Vary algorithm used by 2.5 and to decode this into
> suitable request headers as part of the purge. The required
> information is found in a meta TLV header of the object.
>
> Regards
> Henrik


[squid-users] purging variants

2006-03-18 Thread lawrence wang
I've seen a few posts explaining that Squid 2.5's Vary: support
doesn't work so well with PURGE, since it requires that you send the
exact headers along with the URL for that variant. I was wondering,
since this is a significant hassle, if anyone's written a patch that
makes Squid purge all variants under a given URL, something that would
then be usable with the existing third-party purge tool. And if not,
can anyone point me in the general direction of the code I might want
to start digging into to roll my own patch? Thanks in advance.
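
For context, purging a single variant today looks roughly like this
(hypothetical URL and header; squid.conf must permit the PURGE method):

# squid.conf: allow PURGE from localhost only
acl purge method PURGE
http_access allow purge localhost
http_access deny purge

# the PURGE has to replay the request headers the variant was stored under
curl -x localhost:3128 -X PURGE -H 'Accept-Encoding: gzip' http://example.com/page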
--Lawrence Wang