* Henrik Nordstrom [EMAIL PROTECTED]:
On fre, 2007-07-27 at 10:33 +0200, Ralf Hildebrandt wrote:
The debian/unstable squid3 packages are configured using:
...
--enable-storeio=ufs,aufs,coss \
--enable-diskio=AIO,Blocking,DiskDaemon,DiskThreads \
...
What exactly does
As far as I can tell, PURGE requests via HTTP/1.1 work fine. But
I've read some documentation that implies that this was not always
the case. The release notes don't seem to say. Anyone know which Squid
version started supporting PURGE via HTTP/1.1?
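A minimal sketch of how PURGE is usually exercised, assuming squidclient is installed and an ACL permits the method (the acl name `PURGE` here is just a convention, not required):

```
# squid.conf: restrict PURGE to localhost
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

# then, from the squid host itself:
squidclient -m PURGE http://example.com/some/object
```

A 200 reply means the object was removed; a 404 means it was not in the cache.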
Ric
On Jul 29, 2007, at 3:02 PM, Michael Pye wrote:
Ricardo Newbery wrote:
I'm pretty sure entries are cached (and are purgeable) based on the URL
coming into squid, not whatever squid rewrites it to.
No - as it states, they are cached based on what the URLs are rewritten to.
You have to be
Dear,
is it possible? e.g. I want to set a delay pool to limit the bandwidth if
someone downloads a file bigger than 20 MB, and if it's smaller than 20 MB
there will be no delay pool. I know how to use delay pools but I don't know
how to set an ACL to match the max size. Any clue?
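For what it's worth, delay pools cannot match on object size directly; the usual approximation is a class 1 pool whose bucket starts at 20 MB, so the first 20 MB of any transfer go at full speed and only the remainder is throttled. A sketch, with illustrative rates:

```
# squid.conf sketch: first ~20 MB unthrottled, then ~32 KB/s
delay_pools 1
delay_class 1 1
delay_access 1 allow all
# delay_parameters <pool> <restore rate bytes/s>/<max bucket size bytes>
delay_parameters 1 32000/20971520
```

Note that class 1 throttling is aggregate; per-client limits would need a class 2 or 3 pool.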
Hi All,
I installed squid on Debian Etch, and although the ACL rules allow
localhost, I still get an Access Denied message with a transparent
setup.
My squid.conf is:
---
# grep -v '^#\|^$' squid.conf
http_port 3128 transparent
Under How come some objects do not get cached?
http://wiki.squid-cache.org/SquidFaq/InnerWorkings#head-
aed2acb07aed79ef1f7a590447b6a45a8dd8e7d1
we read:
Responses for requests with an Authorization header are cacheable
ONLY if the response includes Cache-Control: Public.
Responses
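A hypothetical exchange illustrating the rule quoted above: without the `Cache-Control: public` directive, the Authorization header would make this response uncacheable by a shared cache.

```
GET /report HTTP/1.1
Host: example.com
Authorization: Basic dXNlcjpwYXNz

HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Content-Type: text/html
```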
On 30/07/2007 at 12:38, Kris [EMAIL PROTECTED] wrote:
Dear,
is it possible? e.g. I want to set a delay pool to limit the bandwidth if
someone downloads a file bigger than 20 MB, and if it's smaller than 20 MB
there will be no delay pool. I know how to use delay pools but I don't know
how to set an ACL to lock the
On mån, 2007-07-30 at 11:14 +0200, [EMAIL PROTECTED] wrote:
auth_param digest program c:/squid/libexec/digest_ldap_auth.exe -A
description -b DC=aude,DC=com -D
Cn=administrateur,OU=Users,DC=aude,DC=com -w toto -F
sAMAccountName=%s -h 192.1.1.1
realm AUDE
and for example a user squid and
On mån, 2007-07-30 at 01:16 -0700, Ricardo Newbery wrote:
As far as I can tell, PURGE requests via HTTP/1.1 work fine. But
I've read some documentation that implies that this was not always
the case. The release notes don't seem to say. Anyone know which Squid
version started supporting
On mån, 2007-07-30 at 01:24 -0700, Ricardo Newbery wrote:
Purges work with the incoming url. Purges don't work with the
outgoing url (or actually, the outgoing *path* fed into the squid
host:port).
Purge requests are also subject to the url rewriter before they are
processed.
Regards
On mån, 2007-07-30 at 14:46 +0300, GoogleGuy wrote:
The weird thing is, if I manually configure Firefox to access the Web
via localhost:3128, it works fine, no matter whether I use the
transparent keyword or not. The ACL rule that allows localhost is
in effect in this case, since if I change
On mån, 2007-07-30 at 05:04 -0700, Ricardo Newbery wrote:
Under How come some objects do not get cached?
http://wiki.squid-cache.org/SquidFaq/InnerWorkings#head-
aed2acb07aed79ef1f7a590447b6a45a8dd8e7d1
we read:
Responses for requests with an Authorization header are cacheable
On tor, 2007-07-26 at 21:47 -0700, Reid wrote:
I've only been running this squid server for about a week so I doubt this is
an unusual problem.
Can anyone help??
More observations:
1- I suspect that this problem began when someone using my proxy was
clicking through a bunch of sites
On Mon, 30 Jul 2007 15:56:11 +0200
Henrik Nordstrom [EMAIL PROTECTED] wrote:
The weird thing is, if I manually configure Firefox to access the
Web via localhost:3128, it works fine, no matter whether I use the
transparent keyword or not. The ACL rule that allows localhost is
in effect in
There's at least one site that doesn't play well with the (presumably)
pipeline_prefetch setting:
http://www.imdb.com/title/tt0091635/
gives:
404 - Sorry, no prefetching
This effectively makes pipeline_prefetch unusable as a common strategy.
Why would they want to do that? Any negative impacts
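For anyone hitting sites like this, the setting is a global on/off switch, so the only workaround on the Squid side is to disable it entirely:

```
# squid.conf: don't fetch pipelined requests ahead of time
pipeline_prefetch off
```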
On mån, 2007-07-30 at 17:29 +0300, GoogleGuy wrote:
Thanks for your suggestion, but like I said, still no luck.
access.log sample when trying to access google.com:
1185804381.874 0 192.144.46.78 TCP_DENIED/403 1450 GET
http://www.google.com/ - NONE/- text/html
1185804381.950 92
On Mon, 30 Jul 2007 17:09:30 +0200
Henrik Nordstrom [EMAIL PROTECTED] wrote:
You need to allow Squid to go out without getting redirected back on
itself..
You mean with iptables or can I set this up with Squid's ACL?
Andrei
On mån, 2007-07-30 at 20:17 +0300, GoogleGuy wrote:
On Mon, 30 Jul 2007 17:09:30 +0200
Henrik Nordstrom [EMAIL PROTECTED] wrote:
You need to allow Squid to go out without getting redirected back on
itself..
You mean with iptables or can I set this up with Squid's ACL?
It's mainly
On Mon, 30 Jul 2007 19:41:27 +0200
Henrik Nordstrom [EMAIL PROTECTED] wrote:
You mean with iptables or can I set this up with Squid's ACL?
It's mainly iptables, using the owner match..
This really should be on the Wiki pages or in the docs! Doesn't anyone
use Squid as a personal proxy
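For the record, a sketch of the owner-match rules being discussed, assuming Squid runs as user "proxy" (the Debian default) and listens on port 3128:

```
# let Squid's own outbound traffic leave untouched
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner proxy -j RETURN
# redirect everyone else's port-80 traffic into Squid
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 3128
```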
Juraj Sakala wrote:
Thanks once again for shedding light on this. Do you have any examples
of how I can use req_header to detect if my clients have their own proxy
servers?
Any clues, web links or posts will be highly appreciated.
Also, is req_header the only option whereby we can detect child
On mån, 2007-07-30 at 21:00 +0300, GoogleGuy wrote:
This really should be on the Wiki pages or in the docs! Doesn't anyone
use Squid as a personal proxy from localhost?
Not many.. and even fewer as a transparently intercepting proxy for
personal use..
Most are running Squid on a
On mån, 2007-07-30 at 23:55 +0545, Tek Bahadur Limbu wrote:
I have applied the techniques you describe above. I still have problems
detecting my child proxies. In layman's terms, how do I see them in the
first place?
If you are lucky the child proxies add X-Forwarded-For or Via headers,
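Those headers can be matched with req_header ACLs; a sketch (the regex "." simply matches any non-empty header value, and the acl names are made up):

```
# squid.conf: flag requests that appear to come through another proxy
acl has_via req_header Via .
acl has_xff req_header X-Forwarded-For .
# e.g. deny such requests, or just watch them in access.log:
# http_access deny has_via
```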
GoogleGuy wrote:
This really should be on the Wiki pages or in the docs! Doesn't anyone
use Squid as a personal proxy from localhost?
I really think doing that (using squid as a personal proxy on
localhost) is FAR from being a smart idea. The bandwidth saving with a
single client
I looked all over for a way to do this, but I was unable to find one.
If a user attempts to go to a domain that does not exist, the custom
squid error page is returned. What I would like to do is have the
browser act as it would if there were no proxy at all. I am running
squid 2.6.STABLE5 on a
On Jul 30, 2007, at 6:52 AM, Henrik Nordstrom wrote:
On mån, 2007-07-30 at 01:16 -0700, Ricardo Newbery wrote:
As far as I can tell, PURGE requests via HTTP/1.1 work fine. But
I've read some documentation that implies that this was not always
the case. The release notes don't seem to say.
On mån, 2007-07-30 at 13:30 -0700, Ricardo Newbery wrote:
Regarding PURGE of Vary objects. The solution is just not to include
a Vary in the PURGE, correct? So all variants are purged.
No, currently PURGE of variants is a bit broken.. you can purge
individual variants by including the
On Jul 30, 2007, at 1:55 PM, Henrik Nordstrom wrote:
On mån, 2007-07-30 at 13:30 -0700, Ricardo Newbery wrote:
Regarding PURGE of Vary objects. The solution is just not to include
a Vary in the PURGE, correct? So all variants are purged.
No, currently PURGE of variants is a bit broken..
On Jul 30, 2007, at 2:07 PM, Ricardo Newbery wrote:
On Jul 30, 2007, at 1:55 PM, Henrik Nordstrom wrote:
On mån, 2007-07-30 at 13:30 -0700, Ricardo Newbery wrote:
Regarding PURGE of Vary objects. The solution is just not to
include
a Vary in the PURGE, correct? So all variants are
On mån, 2007-07-30 at 14:07 -0700, Ricardo Newbery wrote:
But this seems orthogonal to the HTTP/1.0 issue. Purging instead via
HTTP/1.0 still doesn't purge all variants, correct?
Correct. Squid has always ignored the HTTP version of the request, as it
should when acting as an HTTP/1.0 proxy...
On mån, 2007-07-30 at 14:28 -0700, Ricardo Newbery wrote:
Also... does this mean that purging of gzipped content (Vary: Accept-
Encoding) is also broken?
Yes, most of the time..
There is an open bug with a preliminary patch for this, however.
Regards
Henrik
On mån, 2007-07-30 at 16:19 -0400, Chase Putans wrote:
I looked all over for a way to do this, but I was unable to find one.
If a user attempts to go to a domain that does not exist, the custom
squid error page is returned. What I would like to do is have the
browser act as it would if