The squid process grows without bound here. I've read the FAQ and
tried lowering the cache_mem setting and decreasing the cache_dir
size. That server has 4 GB of physical memory, and with the total
cache_dir size set to 60 GB, squid's resident size still grows beyond
bounds and starts eating into swap.
Note
I've already set memory_pools off in squid.conf, so squid should
free all unused memory. But I can still see these lines in cachemgr
after running for some time:
memPoolAlloc calls: 549769495
memPoolFree calls: 545292412
Did I miss something?
Regards
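As a rough sanity check, the expected footprint of a 60 GB cache_dir can be estimated with the Squid FAQ's rule of thumb of roughly 10 MB of RAM per GB of cache_dir for the in-memory index (the cache_mem value below is a made-up example, not taken from this thread):

```shell
# Rule-of-thumb estimate of Squid's memory footprint (assumptions:
# ~10 MB RAM per GB of cache_dir for the index, per the Squid FAQ;
# the cache_mem value is a hypothetical example)
cache_dir_gb=60
cache_mem_mb=256
index_mb=$((cache_dir_gb * 10))          # in-memory index for on-disk objects
total_mb=$((index_mb + cache_mem_mb))    # plus the hot-object memory cache
echo "index: ${index_mb} MB, rough total: ${total_mb} MB"
```

With 60 GB of cache_dir the index alone approaches 600 MB before any hot-object cache or per-connection overhead, so on a 4 GB box some of the growth may be expected metadata rather than a leak.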
Thanks for your reply.
# /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.7.STABLE6
configure options: '--prefix=/usr/local/squid' '--with-pthreads'
'--with-aio' '--with-dl' '--with-large-files'
'--enable-storeio=ufs,aufs,diskd,coss,null'
'--enable-removal-policies=lru,heap' '--enable-htcp'
I'm interested to know whether you have managed to get this working
reliably in your ISP environment.
Not yet. We are still doing some tests in our own environment.
How far have you gone to make Squid truly transparent eg
* suppressing the Squid headers, error messages etc.
* Is there any way
I've tried but with no luck.
The number of close and write operations rose a lot, but with
cache.log reporting queue congestion, squid became jammed: HIT service
time soared to several seconds and CPU usage was pegged at 100%. I had
to restart squid manually to recover. After
to defer a storage manager transaction from
beginning whilst another one is in progress for that same connection.
I'd suggest using your OS profiling to figure out where the CPU is
being spent. This may be a symptom, not the cause.
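A first pass at that profiling might look like this (sketch only; tool availability depends on the OS, and the squid PID must be filled in):

```shell
# Sketch only: sample where squid's CPU time goes (tools are suggestions)
top -H -p "$(pidof squid)"   # Linux: per-thread CPU view
truss -p <squid_pid>         # FreeBSD: trace system calls of the running process
```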
adrian
2008/12/7 Bin Liu [EMAIL PROTECTED]:
Hi there,
Squid
Thank you very much for your reply!
Which version of Squid are you using again?
I'm using the latest stable release:
# /usr/local/squid/sbin/squid -v
Squid Cache: Version 2.7.STABLE5
configure options: '--prefix=/usr/local/squid' '--with-pthreads'
'--with-aio' '--with-dl' '--with-large-files'
Hi there,
Squid is pegging the CPU at 100% with storeClientCopyEvent, and HIT
service time soars up to several seconds here. The following is what I
see in cachemgr's events page:
Operation             Next Execution     Weight  Callback Valid?
storeClientCopyEvent  -0.019010 seconds  0       yes
storeClientCopyEvent  -0.019010 seconds  0       yes
WCCPv2 is supported in all current Squid releases (2.6+, 3.0+).
Squid just needs to be built with --enable-wccpv2.
Amos
--
Please be using
Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
Current Beta Squid 3.1.0.1
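For reference, a minimal WCCPv2 squid.conf sketch might look like the following (the router address is a placeholder, and the service/method values are common defaults, not a configuration verified for this thread):

```
# squid.conf -- minimal WCCPv2 sketch (192.0.2.1 is a placeholder router IP)
wccp2_router 192.0.2.1
wccp2_forwarding_method 1    # 1 = GRE encapsulation
wccp2_return_method 1
wccp2_service standard 0     # standard service group 0 = HTTP (web-cache)
```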
Do they also fully support these two features?
1. Sending the packets squid can not
Hi,
Just read the WCCPv2 document from
http://www.cisco.com/en/US/docs/ios/12_0t/12_0t3/feature/guide/wccp.html.
It seems that WCCPv2 supports web caches tunneling the packets they do
not service back to the same router from which they were received.
Once a router has received a formerly redirected
Thanks for your reply.
The redirection in both directions must match for this to work. See the
wiki for a configuration example
http://wiki.squid-cache.org/ConfigExamples/FullyTransparentWithTPROXY
The configuration example does not mention the scenario where one
router talks to *MULTIPLE*
Hi,
I'm going to deploy multiple squid servers in an ISP for HTTP traffic
caching. I'm considering using WCCP for load balancing and TPROXY for
fully transparent interception.
Here is the problem. As far as I know, the Cisco WCCP module does not
maintain connection state; it just redirects
Hmm.. should have been enabled by default.. Fixed. A small typo
prevented WCCP (v1) from being enabled at all.. (done)
For the latest squid-2.6.PRE2-20060612, WCCP (v1) works fine. All OK.
Nice job, thanks.
#
# TAG: wccp2_router
# Use this option to define your WCCP ``home'' router
Hi,
I've already downloaded the newest version (squid-2.6.PRE2-20060609),
but I don't know how to compile it with both WCCP and WCCPv2 support.
At first, I followed the instructions in squid.conf.default, which say
# TAG: wccp_router
# Note: This option is only available if Squid is rebuilt with the
#
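Building squid-2.6 with both protocol versions comes down to configure flags; a sketch (the prefix and any other flags are illustrative, matching the build lines quoted elsewhere in this thread):

```
# Build sketch for squid-2.6 with both WCCP versions enabled
./configure --prefix=/usr/local/squid --enable-wccp --enable-wccpv2
make
make install
```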
First, thanks again for your time to review my question and write back to me.
But there is still something I can't understand.
for me wget has about the same success/failure rate as Squid in getting
speedy responses from the site.
For me the success/failure rate of wget is 100:0 while squid
Are there any ACLs in your squid configuration?
Have you tried accessing the sites with squid's default config?
Already tried that from the very beginning, even with caching
disabled; no help.
Hi,
I found that when squid fetches some web pages like
http://en.beijing2008.com/, the connection to the origin server always
breaks after only one or two packets arrive. A squidclient diagnosis
shows that only the first part of the HTML file arrives; after that,
it seems that squid is waiting for
On 4/4/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
Looks more like a firewall or overload problem to me.
Or wrongly tuned TCP retransmit timers at the origin..
There is no other traffic through the squid box at that time, and I
can use fetch or wget to download that web page completely just on the
header_access X-Forwarded-For deny all
On 3/7/06, spcatch55 [EMAIL PROTECTED] wrote:
Is there a way to prevent squid 2.5 from sending the
HTTP_X_FORWARDED_FOR header? My squid.conf contains
forwarded_for off but it still sends unknown as
the value, rather than not sending the header
The Squid FAQ answers this question clearly; you should read it more
carefully next time.
http://www.squid-cache.org/Doc/FAQ/FAQ.html#toc8.7
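In short, suppressing the header entirely on Squid 2.5 takes both directives together; a minimal squid.conf fragment:

```
# squid.conf: stop adding X-Forwarded-For ("unknown") and strip any incoming copy
forwarded_for off
header_access X-Forwarded-For deny all
```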
The following is the script I use; I hope it helps.
Simply create a file named 'squid.sh' (or any other name you like) in
'/usr/local/etc/rc.d', copy the script content into it, save it, and
change its mode to 755. That's all. :)
Wonderful!! I've tested this on my linux box, using iptables. All worked fine.
But the problem here is that my production box runs FreeBSD, and the
only tools I can use are ipfw + natd. Could someone please tell me how
to do the same there?
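A minimal ipfw sketch for transparent interception on FreeBSD might look like this (the interface name, rule number, and squid port are assumptions, and squid itself must also be configured for transparent acceleration):

```
# ipfw fragment -- redirect inbound port-80 traffic to Squid
# (em0 and port 3128 are examples; adjust to your interface and http_port)
ipfw add 100 fwd 127.0.0.1,3128 tcp from any to any 80 in via em0
```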
.. I use transparent proxy, this is a problem ??
Yes, this IS a problem.
Your SSL requests don't go through squid at all; that is why you
can't see anything in your access.log.
It seems that you have not created the GRE tunnel successfully. You can try this:
# ifconfig gre0 create
# ifconfig gre0 link0 tunnel squid_host_IP router_IP up
where:
router_IP is the IP address of the router's physical interface (not the loopback interface)
Besides, I noticed that with IOS 12.x, if you
You should create a GRE tunnel to decapsulate the GRE packets as
Henrik has mentioned in STEP 2:
# ifconfig gre0 create
# ifconfig gre0 link0 tunnel squid_host_IP router_IP up
where:
router_IP is the IP address of the router's physical interface (not the loopback interface)
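On the router side, a matching WCCPv1 setup is usually just a few IOS lines (the interface name is an example; IOS 12.x defaults to WCCPv2, so version 1 has to be selected explicitly):

```
! Cisco IOS sketch (FastEthernet0/0 is an example interface)
ip wccp version 1
ip wccp web-cache
interface FastEthernet0/0
 ip wccp web-cache redirect in
```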
), then WCCP version 1 is
selected.
- Original Message -
From: Francis Vidal [EMAIL PROTECTED]
To: Bin Liu [EMAIL PROTECTED]
Sent: Thursday, April 07, 2005 8:41 AM
Subject: Re: [squid-users] Help with Squid 2.5 on FreeBSD 5.3 and WCCP
I think you need to specify link2 when you create
My squid box worked fine for the last two months, but this morning it
went down. I found this in cache.log:
cache.log
2005/02/03 10:17:15| WARNING: Forwarding loop detected for:
GET /images/web3_pic1.gif HTTP/1.0^M
Accept: */*^M
Referer:
Hi, folks,
I want to permanently cache some update packs, including Windows
Update and Symantec LiveUpdate files. I've searched the web and found
someone using squid + apache + perl to achieve this. That is a little
too complicated, and I don't think running redirectors is a good idea
under heavy
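Squid alone can get part of the way there with refresh_pattern overrides; a hypothetical fragment (the domain pattern and lifetimes are illustrative examples, not tested values):

```
# squid.conf: aggressively cache update-pack downloads (values are examples)
refresh_pattern -i windowsupdate\.com/.*\.(cab|exe|psf)$ 43200 100% 129600 override-expire ignore-reload
```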
file to show?
Could it be that squid was restarted or shut down improperly, and
that led to a dirty cache?
Jonathan,
On Tue, 23 Nov 2004 11:24:49 +0800, Bin Liu [EMAIL PROTECTED] wrote:
Hi, folks,
My squid box has terminated service twice in the last two days. Before
Hi, folks,
My squid box has terminated service twice in the last two days.
Before this, it was running pretty well for about two weeks without
any config change. It serves a load of about 200 req/s (peak) and 130
req/s (mean).
- System information
- AMD Opteron 248 * 2
-
Yep, I think I'm in the same situation.
I think it would be better if, when we set forwarded_for off in
squid.conf, we never saw X-Forwarded-For: unknown when there is no
X-Forwarded-For header already, and squid did not append unknown
when we already have one.
On Wed, 17 Nov 2004 10:12:38
Your box must have two CPUs with SMP enabled in your OS; otherwise,
you will see no improvement from it.
Squid checks for a running PID before starting, so when you try to
start a squid while another is running, you will see a message like
Squid is already running! Process ID 2383. You can open
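The usual way to talk to the running instance is through squid -k rather than starting a second copy; for example:

```shell
squid -k check        # exit status tells whether a squid is already running
squid -k reconfigure  # reload squid.conf without restarting
squid -k shutdown     # stop the running instance cleanly
```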