Amos Jeffries wrote:
Hi

I installed Squid 2.6 on a CentOS 5.1 x86_64 system about a week ago, and
it worked fine for the first few days.
I have set all clients to use the Squid proxy for all external (non-private,
i.e. outside 192.168.0.0/24) IP addresses. The only squid.conf settings I
changed from the defaults were ACL changes to allow proxy access to everyone
on the local network.
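The change amounts to something along these lines in squid.conf (the ACL
name here is just illustrative; 192.168.0.0/24 is our local network):

    # sketch of the kind of ACL change described above (acl name illustrative)
    acl localnet src 192.168.0.0/24
    http_access allow localnet
    http_access deny all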

I now have the following situation on this client:
1) I can browse local addresses fine (as they go direct, not via the proxy).
2) I can browse a few non-local addresses fine. For example, I can refresh my
ISP's usage data OK, and it is clearly refreshing the live data via Squid.
3) If I browse most other web addresses, the Firefox tab hangs indefinitely
with the little circular loading animation on the tab.
4) If I revert to direct access (no proxy), everything works fine.
5) I have deleted the entire cache (roughly as sketched after this list), and
maybe that helped for a bit, but the problem returned very soon after.
6) I have checked CPU and memory usage on the CentOS machine, and everything
looks fine; there is almost nothing happening.
7) I did make some router changes to try to prevent direct access from
clients, but I have since reverted them because the router did not behave as
expected. It is now back to the starting point, but the problem persists.
8) I have recently installed sarg, Calamaris and Webalizer, but I doubt
these could be responsible for the problem.
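Deleting the cache (point 5) was roughly along these lines; the paths and
service commands assume the stock CentOS squid package, so they may differ:

    # rough sketch of clearing and rebuilding the cache (point 5)
    # /var/spool/squid is the CentOS package default cache_dir
    service squid stop
    rm -rf /var/spool/squid/*
    squid -z            # recreate the cache (swap) directories
    service squid start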

Can anyone suggest what might be going on here and, if so, how to fix it?
If not, can anyone suggest some diagnostic steps?

It sounds like you are hitting one of the interception catch-22s, only you
don't mention interception.
Do you have any firewall entries specifically for the proxy box?

What exactly do your ACL and access lines look like now?

Thanks Amos. Interestingly, whatever the problem was seems to time out after several hours; all is working OK this morning. I don't intentionally have any "interception" set up.

I did try to set up firewall rules for the proxy box, but my firewall/router is a Netgear DG834G, and there seems to be something wrong with its outgoing rules implementation. Specifically, I set up the following rules, in this order:

1) Always allow any port outgoing from the proxy's IP.
2) Always disallow any port outgoing from all IPs.
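The intent was the equivalent of the following iptables rules on a Linux
gateway (purely illustrative, since the DG834G is configured through its web
interface, and 192.168.0.2 here is just a stand-in for the proxy's address):

    # illustrative iptables equivalent of the two router rules above
    # 192.168.0.2 is a stand-in for the proxy box's address
    iptables -A FORWARD -s 192.168.0.2    -j ACCEPT  # always allow outgoing from the proxy
    iptables -A FORWARD -s 192.168.0.0/24 -j DROP    # disallow outgoing from all IPs (allow rule matches first)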

When I set this up, I had very erratic behaviour: some web pages came up slowly, and some not at all. There were also problems with fetchmail (running on the same box as Squid) downloading mail. I attributed this to a problem in the router. When I removed these rules, things reverted to normal, but then a bit later I had this apparent proxy problem.

Given that the problem appears to time out after several hours, I am wondering if there is a DNS issue. I have seen some references to Squid caching DNS information, but I don't know much about it. If there was a temporary DNS problem at some point, would Squid (or something else) cache the DNS "miss" and continue returning that "miss" after the problem was resolved?
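From what I can tell, Squid does keep its own internal DNS cache, and the
lifetimes are controlled by squid.conf directives along these lines (the
values shown are only examples, not necessarily the defaults or my settings):

    # illustrative squid.conf directives for Squid's internal DNS cache
    # (example values only - I have not changed these from the defaults)
    positive_dns_ttl 6 hours      # how long successful lookups are cached
    negative_dns_ttl 1 minute     # how long failed lookups are cached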

While the problem was occurring, I did test the DNS server (BIND) running on the Squid box, and it was able to resolve the addresses that were failing via Squid. If there is a DNS problem, I don't think it is in the BIND server.
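The test was nothing more elaborate than querying the local server directly,
roughly like this (www.example.com stands in for one of the failing sites;
Squid itself should be using whatever /etc/resolv.conf points at, since I
have not set dns_nameservers in squid.conf):

    # rough example of the manual DNS check against the local BIND
    # www.example.com stands in for one of the failing addresses
    dig @127.0.0.1 www.example.com
    nslookup www.example.com 127.0.0.1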

Thanks again for your advice.

Richard.




