Mark Elsen wrote:
Is there a way to configure Squid 2 to always first try fetching through
the (only) parent, and if that fails, then go direct?
Here's what I have now, and this fails miserably if x.x.x.x is down:
cache_peer x.x.x.x parent 3128 3130 no-query default
never_direct
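A minimal sketch of the usual approach (an assumption, not confirmed in this thread): drop the never_direct rule so Squid is allowed to go direct at all, and set prefer_direct off so the parent is tried before direct fetching:

```
# squid.conf - minimal sketch, assuming a single parent at x.x.x.x
cache_peer x.x.x.x parent 3128 3130 no-query default
prefer_direct off    # try the parent first, fall back to direct if it is down
# note: no never_direct rule here, otherwise direct access stays forbidden
```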
Hi,
Any help greatly appreciated. The problem is as follows (I'm sorry
about the length of the message):
If we run squid without WCCP enabled (put the proxy's IP directly into
the browser) it works just fine. It proxies and caches without
problems and it writes it all to the log/cache files.
Hi Squid users,
I have a problem and I hope someone knows the answer.
- I use squid 2.5 Stable 5 on a Redhat Linux Enterprise 4 Server.
- I configured it as reverse proxy with bind 9 DNS Service on the same
system.
- This proxy sits in our DMZ and is accessible from outside via port
80 and
On Thu, 2005-12-08 at 16:16 -0700, Laurie wrote:
On Wed, 2005-12-07 at 21:51 +0100, Matus UHLAR - fantomas wrote:
Maybe you encountered this bug:
Document that tcp_outgoing_xxx works badly in combination with
server_persistent_connections
On Monday 12 December 2005 23:08, Chris Robertson wrote:
Hi Chris,
Hmmm... Very odd. I have set up something similar, (one src acl
specifying an IP address and a file), and it works for me (though that is
using 2.5STABLE7). What does the debugging output look like?
looks like this:
On Friday 16 December 2005 11:03, Marc-Christian Petersen wrote:
Hi again,
Hmmm... Very odd. I have set up something similar, (one src acl
specifying an IP address and a file), and it works for me (though that is
using 2.5STABLE7). What does the debugging output look like?
looks like
Ray La Peyre wrote:
Hi all
I have setup squid on a Red Hat 9 server using ldap authentication which
is running successfully. I would like to know if there are any
applications that can give a report on who is on the proxy at the
moment. Is there a way to do this? I have installed squint, which
hi there,
I've got a little problem with a squid-to-squid connection.
I tried to set up a cache_peer proxy, but I can only access one
website; for the rest I get 111 - connection refused
(there are no ACLs on the peer proxy that block it)
squid.conf:
http_port 80
cache_peer 77.77.77.77 parent 80 0
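For comparison, a minimal child-side sketch (the IP is the placeholder from the original message; the never_direct rule is an assumption for a forwarding-only setup). A 111 (connection refused) usually means nothing is listening on the peer's port, so it is worth confirming the parent's http_port first:

```
# squid.conf on the child proxy - sketch; 77.77.77.77 is a placeholder
http_port 80
cache_peer 77.77.77.77 parent 80 0 no-query default
never_direct allow all    # send every request through the parent
```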
hi,
I'm sorry I can't post the logs; here's a summary:
with squid.conf:
request_body_max_size 0 KB
A small file (~1596 bytes) uploads fine,
but a bigger file (~665879 bytes) gives
Error: Document is empty
- In access.log: ... POST 502 ...
--ok, then
with squid.conf:
request_body_max_size
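For reference, a sketch of setting an explicit upload limit instead of 0 (the squid.conf documentation describes 0 as "no limit", so this is a workaround to try rather than a confirmed fix; the 10 MB figure is an arbitrary example, not from the thread):

```
# squid.conf - sketch; 10 MB is an arbitrary example limit
request_body_max_size 10 MB
```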
Hello All!
I have installed squid on my PC and it is working fine. Now
I want to implement an ACL using the PCs' NIC MAC addresses for
authentication: only the users I have explicitly allowed may access
the internet, and all other users are denied internet access.
I hope someone can help me in this regard.
Thanks in
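A minimal sketch of a MAC-based ACL (an assumption, not from the thread: it requires Squid built with --enable-arp-acl, and the arp ACL type only sees hosts on the same subnet; the MAC addresses below are placeholders):

```
# squid.conf - sketch; MAC addresses are placeholders
acl allowed_macs arp 00:11:22:33:44:55 00:11:22:33:44:66
http_access allow allowed_macs
http_access deny all
```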
Mark Elsen wrote:
Is there a way to configure Squid 2 to always first try fetching through
the (only) parent, and if that fails, then go direct?
Here's what I have now, and this fails miserably if x.x.x.x is down:
cache_peer x.x.x.x parent 3128 3130 no-query default
never_direct
Hi all;
Please let me know if this is the right forum.
I have a client for whom we set up SUSE Enterprise Squid 2.5-STABLE-10
for 1800 to 5000 users. All is working OK, except for one site,
http://clients.thefocalpoint.com/restricted/AVG/index.htm; this site uses NTLM
basic
On Wed, 2005-12-14 at 13:44 -0500, Carmelo A. Zizza wrote:
Hi all;
Please let me know if this is the right forum.
I have a client for whom we set up SUSE Enterprise Squid 2.5-STABLE-10
for 1800 to 5000 users. All is working OK, except for one site,
I made no changes to my squidGuard config file, but yesterday it just
stopped working. My logrotate sent me an email saying it couldn't contact
the PID for squid and sure enough, it had crashed. The logs said it was
because the squidGuard processes were crashing too rapidly. I checked the
logs
I was able to get it working by changing the line to:
weekly smtwhfa 06:00 - 21:00
It's still weird to see this though and I hope someone out there can let me
in on why it did this.
Brian
-Original Message-
From: Brian Phillips [mailto:[EMAIL PROTECTED]
Sent: Friday, December 16, 2005
I made no changes to my squidGuard config file, but yesterday it just
stopped working. My logrotate sent me an email saying it couldn't contact
the PID for squid and sure enough, it had crashed. The logs said it was
because the squidGuard processes were crashing too rapidly. I checked the
Hello All!
I have installed squid on my PC and it is working fine. Now
I want to implement an ACL using the PCs' NIC MAC addresses for
authentication: only the users I have explicitly allowed may access
the internet, and all other users are denied internet access.
I hope someone can help me in this regard.
Hello,
I've got a recurring problem with squid. I'm running it on a freebsd6
box installed from the latest port. Approximately every 3 to 4 days internet
access slows to a crawl, and on the squid box squid processes are up in cpu
time. Additionally, whenever a LAN machine requests a page
...and still have squid be online ?
I would think so.
M.
Hello All!
I have installed squid on my PC and it is working fine. Now
I want to implement an ACL using the PCs' NIC MAC addresses for
authentication: only the users I have explicitly allowed may access
the internet, and all other users are denied internet access.
Do not rely on the MAC address too much,
Hello,
Thanks for the suggestion. I checked out the FAQ and /dev/null does not
appear to be the issue. As for memory, this box has 1 GB of RAM, and during
these high-CPU periods I do not detect swapping. I suspect that since the
cache is full, squid is dumping the oldest items and taking a while
- Original Message - From: Mark Elsen [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED] Cc: squid-users@squid-cache.org
Sent: Friday, December 16, 2005 11:59 AM
Subject: Re: [squid-users] squid cache delay?
Hello, I've got a recurring problem with squid. I'm running it on a freebsd6
List,
What is the description of the --enable-ntlm-fail-open set at
./configure?
The text displayed on the ./configure --help is quite vague...
I've searched the internet and haven't found any in-depth
description...
Anyways, sometimes I get this error on my cache.log:
Richard Lyons wrote:
gcc-3.4 is known to generate incorrect code when optimising on x86_64
(http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21804), it may be the
cause of the segfault. If removing the optimisations doesn't help
you might try gcc-4.
Rick.
OK it wasn't caused by
On Friday 16 December 2005 18:01, Michał Margula wrote:
OK, it wasn't caused by optimisations. I removed all flags and left only
-march=nocona. It seems that this is a squid bug, but on Squid's
Bugzilla they asked me to run a malloc debugger, which I can't do. I am
afraid it is a good time to go back