Ray La Peyre wrote:
Hi all
I have set up Squid on a Red Hat 9 server using LDAP authentication, and it
is running successfully. I would like to know if there are any
applications that can report on who is using the proxy at the moment.
Is there a way to do this? I have installed Squint, which
works if I restart Squid each time I edit the file. Is there a way
to have Squid re-read the file on a regular interval without starting and
stopping Squid?
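Squid itself has no real-time viewer, but for a rough "who is on the proxy right now" picture you can tally recent access.log lines per client. A minimal sketch, assuming Squid's default native log format (where the third field is the client address); the sample lines here are made up:

```python
from collections import Counter

def client_counts(lines):
    """Tally requests per client address from native-format access.log lines.

    In Squid's default format the fields are:
    timestamp elapsed client action/code size method URL ident hierarchy/from type
    so the client address is field 3 (index 2).
    """
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 2:
            counts[fields[2]] += 1
    return counts

# made-up sample lines in the native log format
sample = [
    "1104271801.943 5873 205.209.140.20 TCP_MISS/200 446 CONNECT 209.152.181.224:25 - DIRECT/209.152.181.224 -",
    "1104271802.100 120 10.0.0.14 TCP_HIT/200 1024 GET http://example.com/ - NONE/- text/html",
    "1104271803.500 80 10.0.0.14 TCP_MISS/200 2048 GET http://example.com/a - DIRECT/1.2.3.4 text/html",
]
print(client_counts(sample).most_common())
```

In practice you would feed it the tail of your live access.log rather than a hard-coded sample.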
With this command Squid will re-read the configuration without actually
stopping/starting:
/path/to/squid/sbin/squid -k reconfigure
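If you really do want Squid to pick up edits on a fixed interval rather than on demand, the reconfigure can be driven from cron; a sketch of an /etc/crontab entry (use whatever your actual binary path is):

```
# re-read squid.conf every 15 minutes without restarting squid
*/15 * * * * root /path/to/squid/sbin/squid -k reconfigure
```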
--Original Mail--
From: Sturgis, Grant [EMAIL PROTECTED]
I didn't recompile Squid from source, but simply used the FC3 RPM. Does
anyone know if that package was compiled with the
--enable-linux-netfilter and --enable-wccp options? Can you tell me how to
check?
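One way to check without recompiling: `squid -v` prints the version together with the ./configure options the binary was built with, so you can simply look for the flags (output shown is an example, not the actual FC3 package):

```
squid -v
# Squid Cache: Version 2.5.STABLE9
# configure options: --enable-linux-netfilter --enable-wccp ...
```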
---
Did you compile
are looking to accomplish (control access/gauge
utilization).
Regards,
Scott Phalen
On Wed, 2005-03-02 at 12:41 +0100, Henrik Nordstrom wrote:
If someone with RedHat AS 3 figures out what they have done to their Linux
kernel this time, and a good way of detecting it I am happy to update the
ip_wccp module accordingly.
Regards
Henrik
Thanks! I wish I knew a bit more. I have searched the archives and Google
but can't find any solutions for
this. I tried using the latest ip_wccp.c file from squid-cache.org and a
file I used successfully on AS 2.1 a year ago. Any guidance is greatly
appreciated!
Scott Phalen
http://arbornet.org/~eibwen/squid.conf
http://arbornet.org/~eibwen/squid.conf.rules
~~~
In your squid.conf file you have:
acl all src 0.0.0.0/0.0.0.0
http_access deny all
Change the deny to allow and your Squid should work fine.
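Rather than opening the proxy to everyone, the usual pattern is to allow only your own network ahead of the final deny; a sketch (the subnet is an assumption, substitute your own):

```
acl mynetwork src 192.168.1.0/255.255.255.0
http_access allow mynetwork
http_access deny all
```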
Regards,
Scott
Can anyone suggest the best logfile analyzer for this,
or squid configuration options that will make
access.log more appropriate for this?
A free solution would be Calamaris
(http://cord.de/tools/squid/calamaris/Welcome.html.en)
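Calamaris is a Perl script that reads access.log on standard input; a typical invocation looks something like this (the `-a` "all reports" flag is from its documentation, so check `calamaris --help` on your version):

```
cat /var/log/squid/access.log | calamaris -a > report.txt
```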
Is it possible to route email through squid? I am seeing some weird
activity in my access.log that looks like spammers are forwarding mail
through my cache:
1104271801.943   5873 205.209.140.20 TCP_MISS/200 446 CONNECT 209.152.181.224:25 - DIRECT/209.152.181.224 -
1104271802.066  20403
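That CONNECT to port 25 is the classic open-relay abuse pattern: spammers tunnel SMTP through a permissive proxy. The stock squid.conf guards against it with the Safe_ports/SSL_ports ACLs, roughly:

```
acl SSL_ports port 443
acl Safe_ports port 80 21 443 70 210 1025-65535
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
```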
[Scott Phalen]
Yes, I am seeing all the normal traffic; the TIMEOUT_DIRECT entries just
stood out in the access.log.
You should focus on what squid2 and squid3 say when they forward the
requests to your squid1.
Snippet from access.log on squid1:
1102825620.299    105 199.86.18.23 TCP_MISS/200 298 GET
intercepting port 80 and forwarding such
requests to squid1, even when the requests are from squid2 or squid3.
Regards
Henrik
[Scott Phalen]
You are most likely correct in thinking something is sending back to
squid1. I will have to talk to my router guys but what I think is happening
is squid2
.
[Scott Phalen]
What version of squid are you using?
What do your ACLs look like?
Are you forwarding all requests to another cache? Or cache array?
This is probably a good start:
acl mynetwork src 192.168.1.0/24
http_access allow mynetwork
always_direct allow mynetwork
or
comment out both
as siblings. Is this the correct way? Or should they be parents to each other?
The reason for squid1... I redirect all HTTP traffic via WCCP so squid1 is
attempting to act as a redirector only. No caching. Any advice would be
greatly appreciated!
Scott Phalen
[Scott Phalen]
The odd thing is, there is no cache_peer or cache_peer_access directive
telling squid2 or squid3 anything about squid1. I have been all over the
configurations trying to figure out why squid2 or squid3 would send their data
back to squid1. I am seeing TIMEOUT_DIRECT lines in the access.log
directives telling so should be done.
Regards
Henrik
[Scott Phalen]
Here are my squid.conf from each server (brief versions):
Squid1
http_port 3128
cache_peer carp1.mydomain.com parent 3128 0 carp-load-factor=0.4
cache_peer carp2.mydomain.com parent 3128 0 carp-load-factor=0.6
Squid2 (carp1
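For squid1 to act purely as a CARP redirector with no caching of its own, the peer lines above are normally paired with something like the following sketch (directive names are standard; the null store type requires Squid to have been built with it in --enable-storeio):

```
never_direct allow all     # force all requests through the CARP parents
cache_dir null /tmp        # disable disk caching on the redirector box
```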
I have searched the FAQs but can't find a resolution.
I am using Squid 2.5.STABLE6 on a Dell PowerEdge 650, Red Hat AS 2.1. All my
clients get to Squid via WCCP; Squid hands off certain subnets to
DansGuardian running on the same box, then the requests are sent to another
Squid box before going out
I have 3 cache_peer devices I forward all my traffic thru from Squid. The
first two are McAfee E500 virus scanning devices and the 3rd is a
DansGuardian proxy running on the same squid box.
I have the cache_peer 127.0.0.1 going to DG for content scanning then back
to squid to be routed to
Ok, this issue appears on certain (e.g. carsoup.com, ebay.com, wsca.org)
sites when Squid is sending the request to DansGuardian. I can access the
sites if I put them in my always_direct directive. The actual 400 error
is page cannot be found. Anyone have any ideas? Below is a copy of my
I have done some searching and can't find a resolution to my current issue.
I am running Red Hat AS 2.1 on a Dell PowerEdge 650, 2 GB of RAM, plenty
of storage. Squid is configured to accept requests via WCCP and forwards to
DansGuardian, then out to the Internet.
The issue is on certain
Ok, here is my squid.conf.mlist. I think this issue only happens when squid
forwards the requests to DansGuardian. When I put the concerned domains in
the always_direct directive they work fine. Here is my set up:
Client -> Squid -> DG -> McAfee WebShield -> Internet
99.9% of the
All my users are redirected to the cache via wccp. From squid I'd like
to
have certain users or subnets redirected to DansGuardian for content
filtering. I am able to make this work fairly easily if I configure Squid to
use DansGuardian as a cache_peer, but this affects all users and is not
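Selective forwarding is what cache_peer_access is for: the peer line stays, but an ACL limits which requests may use it. A sketch, assuming DansGuardian listens on port 8080 on the same box and the filtered subnet is 192.168.1.0/24 (both assumptions):

```
acl filtered src 192.168.1.0/255.255.255.0
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
cache_peer_access 127.0.0.1 allow filtered
never_direct allow filtered
```

Users outside the `filtered` ACL would then go direct, untouched by DansGuardian.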
I was searching the FAQs but not exactly sure what I am searching for.
I think what I am trying to do is use cache_peer groups or access, I am not
sure. I will explain my scenario and if anyone has a solution I'd greatly
appreciate it.
Squid2.5STABLE6
Linux AS2.1
Dell PowerEdge 650, 2gig RAM
I have just taken charge of a squid machine that has been showing some
strange behaviour for the last few weeks. For one thing, cpu usage on
average is above 80% which seems abnormally high, even taking into
account the high traffic (about 2000 hits per minute).
Secondly, all traffic randomly
SP> maximum_object_size_in_memory 8 KB
8 KB??
I do think you need to increase this...
SP> Storage Mem size: 18396 KB
I think it's too low because of maximum_object_size_in_memory.
-
I made the change you suggested but it didn't help at all.
I still see the network address (10.0.0.0), but not the host address (10.0.0.14).
What can I fix in my config?
I have similar default squid.conf.
In your squid.conf there is a client_netmask directive; set it to log full
client addresses:
client_netmask 255.255.255.255
With a narrower mask Squid zeroes out the host bits of client addresses
before logging them, which is why you see 10.0.0.0 instead of 10.0.0.14.
I have searched the archives and can't seem to find a solution to my issue.
My server sits at 92% or higher CPU utilization 24 hours a day. During
my peak hours it is at 99% with a load of about 35 requests per second. My
plan is to convert this to ReiserFS in the next month and spread the
Out of the FAQ
3.11 Is it okay to use separate drives and RAID on Squid?
RAID1 is fine, and so are separate drives.
RAID0 (striping) with Squid only gives you the drawback that if you lose one
of the drives the whole stripe set is lost. There is no benefit in
performance as Squid
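Spreading the cache over separate drives is just multiple cache_dir lines, one per spindle; Squid balances objects across them itself. A sketch (mount points and sizes are assumptions):

```
cache_dir ufs /cache1 10000 16 256
cache_dir ufs /cache2 10000 16 256
```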
I would be very interested in your setup, especially
the following:
o How you got the GRE tunnel going
o How the firewall config was affected by the GRE
setup
o What errors, if any, you got from the kernel after
compiling wccp into it
I am trying to configure LDAP_AUTH for clients on a netware network. I can
get the login prompt to come up but I get the below error when logging in:
Am I missing something in this config?
The following error was encountered:
* Cache Access Denied.
Sorry, you are not currently allowed to request:
We need a way to filter based on the whole MIME reply header or on select
MIME fields (filename) to catch these downloads.
I created an ACL to block by keyword, e.g. dialerexe. This will block any
URL that contains that word in the URL string. IF a user attempts to reach
a legitimate site with
Something like dansguardian might do the trick.
If you could do a regex based on the MIME filename field or the whole MIME
reply header, then you could filter something like filename=.*\.exe,
stopping all .exe downloads, but you can't.
You have the MIME type from the logs you showed us
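Even without reply-header matching, Squid 2.5 can at least deny by URL path, which catches the common case where the download URL itself ends in .exe; a sketch:

```
acl exe_urls urlpath_regex -i \.exe$
http_access deny exe_urls
```

This misses downloads served from URLs without the extension, which is where a content filter like DansGuardian comes in.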
I'm new to squid. Would it be possible to set squid between my network
and my border router and just grab ALL the traffic headed to the web or web
related services(given port numbers etc...)? I am trying to get away from
relying on network nodes to be configured properly. If I can get
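Grabbing all port-80 traffic without touching the clients is the transparent-interception setup: netfilter redirects the packets to Squid, and Squid 2.5's httpd_accel directives make it accept them. A sketch, assuming the Squid box sits in the traffic path and the inside interface is eth0:

```
# on the Linux box (interface name is an assumption):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

# in squid.conf (Squid 2.5 transparent-mode directives):
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
```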
Try Calamaris. Excellent tool!!
http://cord.de/tools/squid/calamaris/Welcome.html
Regards,
Scott
-Original Message-
From: Endre Szekely-Bencedi [mailto:[EMAIL PROTECTED]
Sent: Monday, March 01, 2004 5:46 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] Squid Log Analyzer
Thanks, Henrik! I have been watching my squid server all weekend and have
noticed it hovers around 5-8 MB of free RAM. I have a Windows background and
am fairly new to the Linux world, so I saw having no free RAM as a bad thing.
But everything seems to be working fine.
Scott Phalen
-Original Message
windowsupdate.microsoft.com updates OR a link to show me how to create this
config?
Thanks in advance for any help.
Scott Phalen
It is a Microsoft product. There are no fees, but you have to have a server
running IIS and the update services. There is information on Microsoft's
website regarding this.
Scott Phalen
-Original Message-
From: lis Tams [mailto:[EMAIL PROTECTED]
Sent: Saturday, February 21, 2004 10:50 AM
but saving loads of bandwidth.
Is there a way to redirect a specific URL in squid to a web server inside
the network?
Scott Phalen
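Squid hands this off to an external helper named by the redirect_program directive: the helper reads one request per line on stdin (`URL client/fqdn ident method`) and writes back either a replacement URL or a blank line for "leave it alone". A hypothetical helper redirecting windowsupdate.microsoft.com to an internal server (the internal address is an assumption):

```python
#!/usr/bin/env python
import sys

TARGET = "windowsupdate.microsoft.com"   # domain to redirect
INTERNAL = "http://10.0.0.5/updates/"    # hypothetical internal web server

def rewrite(line):
    """Given one redirector input line ('URL client/fqdn ident method'),
    return the replacement URL, or '' to leave the request untouched."""
    parts = line.split()
    if not parts:
        return ""
    if TARGET in parts[0]:
        return INTERNAL
    return ""

if __name__ == "__main__" and not sys.stdin.isatty():
    for line in sys.stdin:
        sys.stdout.write(rewrite(line) + "\n")
        sys.stdout.flush()               # helpers must answer unbuffered
```

It would be hooked in with something like `redirect_program /usr/local/bin/redirect.py` in squid.conf.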
-Original Message-
From: Serassio Guido [mailto:[EMAIL PROTECTED]
Sent: Saturday, February 21, 2004 11:07 AM
To: Scott Phalen; [EMAIL PROTECTED]
Subject: Re
signed, it will not be installed.
-Original Message-
From: Serassio Guido [mailto:[EMAIL PROTECTED]
Sent: Saturday, February 21, 2004 12:02 PM
To: Scott Phalen; [EMAIL PROTECTED]
Subject: RE: [squid-users] Redirecting Windows Update
Hi,
At 18.12 21/02/2004, Scott Phalen wrote:
Actually
This is what I am looking for. What is the redirection command in the
squid.conf?? I have searched and can't find one.
Thanks,
Scott
-Original Message-
From: Mark A. Lewis [mailto:[EMAIL PROTECTED]
Sent: Saturday, February 21, 2004 1:09 PM
To: Serassio Guido; Scott Phalen; [EMAIL
.
Is there something else I should be configuring besides cache_mem and
cache_dir to minimize memory use?
Any help would be greatly appreciated!
Scott Phalen
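As a rule of thumb from the FAQ, Squid needs on the order of 10 MB of RAM per GB of cache_dir for its in-memory index, on top of cache_mem, so the two directives together bound memory use. A sketch with assumed sizes:

```
cache_mem 64 MB                              # hot-object memory cache only
cache_dir ufs /var/spool/squid 2000 16 256   # ~2 GB disk -> roughly 20 MB of index
maximum_object_size_in_memory 8 KB
```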
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 18, 2004 1:11 PM
To: Mouque, Eric
Cc
: Friday, February 20, 2004 12:21 PM
To: Scott Phalen
Cc: [EMAIL PROTECTED]
Subject: RE: [squid-users] FW: Squid memory utilization
On Fri, 20 Feb 2004, Scott Phalen wrote:
Ok, I have been all over the memory FAQs and still can't figure out why my
server is consuming all its RAM. I have 2 GB
Wessels [mailto:[EMAIL PROTECTED]
Sent: Friday, February 20, 2004 12:46 PM
To: Scott Phalen
Cc: [EMAIL PROTECTED]
Subject: RE: [squid-users] FW: Squid memory utilization
On Fri, 20 Feb 2004, Scott Phalen wrote:
Thanks for showing me that command. Here is the info you asked for.
Anything I should
to 5
16 256. What I am concerned about is running out of RAM. From the output
of my top screen... there is no swap being used.
Is there a point where squid will stop consuming more RAM?
Does it hurt the caching process with frequent reboots?
Thanks in advance for your advice.
Scott Phalen
the memory is climbing pretty fast.
Any help would be greatly appreciated!
Scott Phalen
Information Services
City of St Paul