At 02:27 AM 4/20/2005, Denis Vlasenko wrote:
Wow. How come my intranet does not suffer from this at all, despite lots of
folks with various download managers which (mis)use partial downloads?
Partial downloads aren't the issue. Downloaders that use subranges
on very large non-cacheable files are.
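There is a squid.conf knob aimed at exactly this case; a minimal sketch, assuming squid 2.5's range_offset_limit and quick_abort_min directives (values illustrative):
# never prefetch more than the client's Range request asked for
range_offset_limit 0 KB
# stop transferring objects the client has aborted
quick_abort_min 0 KB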
Matus UHLAR - fantomas wrote:
Only for your local machine. And I do not see any reason to use ICP then
On 19.04 14:44, sasa wrote:
..therefore you say to use in squid.conf only:
http_port 127.0.0.1:3128
..only this ??
yes, unless you want to use this squid as a sibling for other
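A sketch of both setups, assuming standard squid.conf syntax (hostname and ports illustrative):
# local-only proxy, ICP disabled
http_port 127.0.0.1:3128
icp_port 0
# a second squid wanting this one as a sibling would instead declare:
# cache_peer proxy.example.com sibling 3128 3130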
How do we assign a url_regex which will allow only the
specified sites in the
url_regex, and only affect a particular machine (IP
address)?
For matching sites you have to use a dstdomain or
dstdom_regex type of acl (note: lowercase acl, not Acl).
Acl user1 src 192.168.100.3
Acl sites url_regex .redhat.com
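A sketch of the corrected rules, assuming the intent is to restrict that one IP to the listed domains (lowercase acl, and dstdomain in place of url_regex):
acl user1 src 192.168.100.3
acl user1_sites dstdomain .redhat.com
http_access allow user1 user1_sites
http_access deny user1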
Thanks, it works. A bug in the debian package? 'other'
doesn't have the rights rx on the directory while the
pipe rights are ok.
Another challenge : how can I set an acl with members
of a NT group? In this case, it will be users in the
group DOMAIN/Internet.
--- Henrik Nordstrom [EMAIL PROTECTED]
Thank you for the tip
I achieved this by doing the following.
/etc/squid/squid.conf:
acl user1 src 192.168.100.3/32
acl user1_sites dstdomain -i "/etc/squid/sites"
http_access deny user1 !user1_sites
http_access allow user1
/etc/squid/sites:
.redhat.com
.sun.com
.java.com
It is working.
We have customized our error messages:
Site Unavailable
You have attempted to access a site that is not blocked, but it is unreachable.
Check the spelling of the web site
Check the request syntax, including
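For reference, squid 2.5's error pages are plain HTML files read from the directory named by error_directory, with macros such as %U expanding to the requested URL; a hedged sketch (path and wording illustrative):
# squid.conf
error_directory /etc/squid/errors-custom
# errors-custom/ERR_CONNECT_FAIL (fragment)
<H2>Site Unavailable</H2>
<P>You have attempted to access a site that is not blocked, but it is unreachable: %U</P>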
Hi,
we've put the cachemgr.cgi on a Microsoft IIS 6.0 server which also runs
Squid for NT.
By allowing "All unknown CGI extensions" and setting cachemgr.cgi as the
default homepage we are able to browse to this site.
When we log on, however, we get a CGI error: "The specified CGI application
misbehaved".
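Besides the IIS settings, cachemgr logins also need the manager ACL on the squid side; a minimal squid 2.5 sketch (adjust the source acl if cachemgr.cgi runs on a different host):
acl manager proto cache_object
http_access allow manager localhost
http_access deny manager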
Hi,
We are evaluating Squid for NT and we are noticing a lot of TCP_MISS and
no TCP_HITs in our logs. We see TCP_IMS_HIT and TCP_REFRESH_MISS too.
I think nothing gets used from the cache.
I've added our config file (I used a script that only shows the lines
without a # in front of them), so the rest
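Such a script can be a shell one-liner; a sketch (filename illustrative):
# print only lines that are neither comments nor blank
grep -v '^[[:space:]]*#' squid.conf | grep -v '^[[:space:]]*$'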
Hi all,
I have also got the same problem.
Regards
Dev
On 4/21/05, Jeroen DEMETS - SAVACO [EMAIL PROTECTED] wrote:
Hi,
We are evaluating Squid for NT and we are noticing a lot of TCP_MISS and
no TCP_HITs in our logs. We see TCP_IMS_HIT and TCP_REFRESH_MISS too.
I think nothing gets
I am having a daily occurrence where my squid stops responding and I have to
reboot the box.
If I try to stop the squid process I end up with one defunct process that I
cannot kill.
[EMAIL PROTECTED] proc]# ps -ef | grep squid
squid    21502     1 19 07:03 ?        00:13:46 [squid <defunct>]
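A zombie cannot be killed; it lingers until its parent reaps it, so the useful first steps are checking the parent PID and trying a clean shutdown. A sketch:
# show parent PID and state (Z = zombie); [s]quid keeps grep itself out of the list
ps -eo pid,ppid,stat,cmd | grep [s]quid
# ask squid to shut down cleanly before resorting to kill
squid -k shutdown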
We have customized our error messages:
Site Unavailable
What's in access.log, then, for the particular request?
Anything further in cache.log?
I am having a daily occurrence where my squid stops responding
and I have to reboot the box.
Anything in cache.log when squid stops responding?
Squid version?
OS/platform/version?
If I try to stop the squid process I end up with one defunct
process that I cannot kill.
[EMAIL
Hi,
We are evaluating Squid for NT and we are noticing a lot of
TCP_MISS and
no TCP_HITs in our logs. We see TCP_IMS_HIT and TCP_REFRESH_MISS too.
I think nothing gets used from the cache.
If you are evaluating using the browser's refresh options,
then that's normal.
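For completeness, squid 2.5 can downgrade a forced reload per refresh_pattern, at the cost of bending HTTP semantics; a hedged sketch (pattern and times illustrative):
# turn a forced reload into an If-Modified-Since revalidation
refresh_pattern -i \.jpg$ 1440 50% 10080 reload-into-ims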
I've added our
The cache log shows a bunch of errors I had not seen until now.
2005/04/21 07:54:36| ctx: exit level 0
2005/04/21 07:54:36| WARNING: unparseable HTTP header field {HTTP/1.1 200 OK}
2005/04/21 07:55:10| sslReadServer: FD 147: read failure: (104) Connection
reset by peer
2005/04/21 07:55:10|
Linux proxy 2.4.21-20.ELsmp #1 SMP Wed Aug 18 20:46:40 EDT 2004 i686 i686 i386
GNU/Linux
squid-2.5.STABLE9
Built with:
[EMAIL PROTECTED] -march=i686 -funroll-loops -DNUMTHREADS=128 -DUNIX
-D_REENTRANT -D_REENTRANT -D_REENTRANT%g
[EMAIL PROTECTED] -march=i686 -funroll-loops%g
# ./configure
You can't, but remote webservers can, by providing cacheable
objects.
Check with:
http://www.ircache.net/cgi-bin/cacheability.py
M.
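The same headers can be checked by hand with curl (URL illustrative):
curl -sI http://www.example.com/ | egrep -i 'cache-control|expires|last-modified|etag'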
Hi,
Thanks for your help.
I checked some sites and that site shows as validated, but anyhow, we are
surfing with a test group of 10 persons and we got no TCP_HITs.
The cache log shows a bunch of errors I had not seen until now.
I have already seen one FATAL error (SIGSEGV).
Normally you are supposed to file a bug report for such issues
at the squid-cache.org site.
However, I also suggest you build a more standard binary,
without the
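A plainer build might look like this; the point is dropping the custom CFLAGS and -D defines (prefix illustrative):
./configure --prefix=/usr/local/squid
make
make install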
I will do that. I started out trying to use the flags we use for all our
Apache builds.
I have noticed some issues since we started using them.
Thanks.
-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 21, 2005 9:20 AM
To: Lewars, Mitchell (EM, PTL)
Cc:
I rebuilt with no FLAGS, I am getting the following:
2005/04/21 09:22:42| storeLateRelease: released 72 objects
2005/04/21 09:22:57| squidaio_queue_request: WARNING - Queue congestion
2005/04/21 09:23:01| CACHEMGR: unknown@3.144.232.23 requesting 'client_list'
2005/04/21 09:23:14| sslReadServer:
I rebuilt with no FLAGS, I am getting the following:
The warning stuff concerning header parsing can be ignored.
Conn. reset is fairly normal if remote servers abort the
connection. I see it a lot too.
Queue congestion is also somewhat normal after a recent restart,
probably due to
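For what it's worth, the async I/O queue depth follows the thread count, which squid 2.5 fixes at build time; a hedged configure sketch (thread count illustrative):
./configure --enable-storeio=aufs --with-aufs-threads=64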
No fatal errors.
I have added debugging, I will monitor it.
Thanks again.
Mitch
-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 21, 2005 9:44 AM
To: Lewars, Mitchell (EM, PTL)
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid will not
The following entries appeared in the parent cache's cache.log file.
2005/04/21 03:09:20| NETDB state saved; 0 entries, 14 msec
2005/04/21 03:11:09| Failure Ratio at 1.09
2005/04/21 03:11:09| Going into hit-only-mode for 5 minutes...
2005/04/21 03:16:13| Failure
Finally, it works. The problem was the debian package
in sarge, which still provides wb_group instead of
wbinfo_group.pl. I had to download the source tarball,
copy the famous perl script into the right
directory, and modify the squid configuration.
Thanks to you all for your help.
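For the earlier question about matching an NT group, the usual wiring for wbinfo_group.pl looks roughly like this (helper path and group name illustrative):
external_acl_type nt_group %LOGIN /usr/lib/squid/wbinfo_group.pl
acl InternetUsers external nt_group Internet
http_access allow InternetUsers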
The following entries appeared in the parent cache's cache.log file.
2005/04/21 03:09:20| NETDB state saved; 0 entries, 14 msec
2005/04/21 03:11:09| Failure Ratio at 1.09
2005/04/21 03:11:09| Going into hit-only-mode for 5 minutes...
2005/04/21 03:16:13| Failure
Hi
The Linux box accesses the Win2K box running ADS to
get users authenticated using squid_ldap_auth;
when the machine running DNS goes down, the Linux box is
not able to reach the Win2K box running ADS, I suppose.
As soon as the DNS system came back to life,
authentication worked fine again.
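One hedged workaround sketch: point squid_ldap_auth at the domain controller by IP so basic authentication does not depend on DNS (IP address and base DN illustrative):
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "dc=example,dc=local" -h 192.168.1.10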
On Thu, 21 Apr 2005, Elsen Marc wrote:
The following entries appeared in the parent cache's cache.log file.
2005/04/21 03:09:20| NETDB state saved; 0 entries, 14 msec
2005/04/21 03:11:09| Failure Ratio at 1.09
2005/04/21 03:11:09| Going into hit-only-mode for 5 minutes...
That's because the authenticators need to discover WHERE to authenticate via
DNS... so they look for the special SRV entries that AD has for _kerberos and
_ldap; if they can't get a response for those entries, then the servers are
assumed to be unreachable.
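Those records can be checked directly; a sketch with dig (domain illustrative):
dig _ldap._tcp.example.local SRV
dig _kerberos._tcp.example.local SRV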
On Thursday 21 April 2005 10:21 am, Babs
Hi,
I'm using squid. The problem is that when my cache
reaches 50% full it tries to run unlinkd; in this case
it takes 3 minutes and eats 99% of the CPU. After a while it
crashes.
I don't know if there is a misconfiguration or a bug.
Thanks for your notice.
Here is my configuration:
CPU: Intel(R) Pentium(R) 4
A common problem that will cause this is configuring your cache size to be
the same as the available HD space, but good practice says you should leave
more space available than is needed. And the cache should be on its own
partition, so that it's not competing with anything like /var/log and /tmp.
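A hedged sketch of the proportions (numbers illustrative): on a dedicated 10 GB partition, size the cache well below capacity so swap.state and temporary growth have room:
# 8000 MB of cache on a 10 GB partition leaves roughly 2 GB of headroom
cache_dir ufs /var/spool/squid 8000 16 256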
hi
squid creates a file named swap.state in its cache directory; after a
couple of days the file grows up to e.g. 22MB. After this squid creates
another file, swap.state.clean, and tries to do something with this file.
The result of this is a swap.state.clean of about 20MB and squid hanging. Still
visible in
On Thu, 21 Apr 2005, Nirina Michel wrote:
Thanks, it works. A bug in the debian package? 'other'
doesn't have the rights rx on the directory while the
pipe rights are ok.
The access to this pipe is restricted by default for security reasons, and
the correct method for providing access to this
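On Debian the usual fix is adding the proxy user to the group owning winbind's privileged pipe directory rather than loosening its modes; a sketch (the group name varies with the samba packaging):
adduser proxy winbindd_priv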
On Thu, 21 Apr 2005, Merton Campbell Crockett wrote:
2005/04/21 03:11:09| Failure Ratio at 1.09
2005/04/21 03:11:09| Going into hit-only-mode for 5 minutes...
There was, however, an unusually high occurrence of 50x statuses recorded in
the access log, in particular 503 and 504
On Thu, 21 Apr 2005, Babs wrote:
Hi
The Linux box accesses the Win2K box running ADS to
get users authenticated using squid_ldap_auth;
when the machine running DNS goes down, the Linux box is
not able to reach the Win2K box running ADS, I suppose.
As soon as the DNS system came back to life
On Thu, 21 Apr 2005, kamil kapturkiewicz wrote:
squid creates a file named swap.state in its cache directory; after a
couple of days the file grows up to e.g. 22MB. After this squid creates another
file, swap.state.clean, and tries to do something with this file.
swap.state grows until you run squid
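The truncated advice presumably refers to rotation; in squid 2.5 a log rotate also rewrites swap.state down to the live object index. A sketch (schedule illustrative):
# e.g. from a daily cron job
squid -k rotate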
Matus UHLAR - fantomas wrote:
yes, unless you want to use this squid as a sibling for other squid
..I have tried, in squid.conf:
http_port 127.0.0.1:3128
and in httpd.conf:
listen 127.0.0.1:80
..but when on the firewall/proxy box in the browser I use:
I am new to ESI. Would you please tell me whether I can use ESI
with a
website using gzip? If so, please tell me how to do it.
Thank you very much.
Regards
Jacky