Probably this is a bug w/ VirtualBox's NAT setup. The stalling was only
happening to sites on the same subnet as the browser.
mike
On 4/6/2010 1:49 AM, Amos Jeffries wrote:
Mike Leong wrote:
I have this weird problem where if I go through squid, a page would
stall until it hits the read
Well, I based my argument on the 10 instances of reverse proxies
I'm running. They hold 266,268,230 objects and 3.7 TB of space. CPU
usage is always around 0.2 according to ganglia. So unless you have
some other statistics to prove CPU is that important, I'll stick w/ my
argument that disk and
At 12:09 PM 7/5/2008, Marcus Kool wrote:
Michel wrote:
On tor, 2008-07-03 at 12:04 +0800, Roy M. wrote:
We are planning to replace this testing server with two or three
cheaper 1U servers (sort of redundancy!)
Intel Dual Core or Quad Core CPU x1 (no SMP)
Squid uses only one core, so
Squid is IO and memory bound, not CPU bound. Use the CPU money to
buy more RAM/disks
mike
At 12:09 PM 7/5/2008, Marcus Kool wrote:
Michel wrote:
On tor, 2008-07-03 at 12:04 +0800, Roy M. wrote:
We are planning to replace this testing server with two or three
cheaper 1U servers (sort
The CPU doesn't do any IO; it's WAITING for the disk most of the
time. If you want fast squid performance, CPU speed/count is
irrelevant; get more disks and RAM. By more disk, I mean
more spindles, e.g. 2x 100GB is better than a single 200GB disk.
mike
At 01:51 PM 7/5/2008, Michel
Hmm, that's weird because I don't have any ACL that would require
DNS. See my original post w/ the squid config.
mike
At 10:46 PM 6/6/2008, Henrik Nordstrom wrote:
On fre, 2008-06-06 at 15:30 -0700, leongmzlist wrote:
Does squid still use dns for reverse proxy requests? All my requests
My cache performance is acting strange; I'm getting extremely high
tcp_hit times for cached objects:
1212787643.465 50343 10.2.7.22 TCP_HIT/200 19290 GET http://cache-int/
1212787737.740 15212 10.2.7.25 TCP_HIT/200 11511 GET http://cache-int/
Those high times come in bursts, e.g.:
have 1 origin server defined and it is used as the default, so
shouldn't squid just go to the backend w/o DNS lookups?
thx,
mike
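For reference, a reverse-proxy backend is typically pinned with an originserver peer line like the sketch below (the IP address and the name= label are hypothetical, not from this thread). Pointing cache_peer at an IP address rather than a hostname avoids runtime DNS lookups for the peer itself:

```
# Hypothetical backend address; originserver marks it as the real web server
cache_peer 10.0.0.5 parent 80 0 no-query originserver name=backend
```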
At 03:10 PM 6/6/2008, Henrik Nordstrom wrote:
On fre, 2008-06-06 at 14:38 -0700, leongmzlist wrote:
My cache performance is acting strange; I'm getting extremely high
What is the swap usage? I once had the same problem w/ squid
degrading over time. I had to reduce cache_mem from 2GB to
512MB, and reduce the number of objects in the cache since the index
was growing too big.
mike
At 11:35 AM 2/26/2008, Guillaume Smet wrote:
Hi squid-users,
We
At 07:39 PM 2/14/2008, Adrian Chadd wrote:
On Thu, Feb 14, 2008, leongmzlist wrote:
I use 32bit squid and it's currently using 3.8GB of RAM. 32bit squid
has a 4G limit
32 bit squid has a 2 gig limit. I suggest you check whether it's actually
32 bit, and if it is, I'd love to know which
Yes, each 32bit app can have up to 4GB of ram in a 64bit environment.
mike
At 06:40 PM 2/15/2008, J. Peng wrote:
On Sat, Feb 16, 2008 at 4:20 AM, leongmzlist [EMAIL PROTECTED] wrote:
Running 32bit squid on 64bit Linux.
really? does this let a 32bit squid support more than 2G of memory?
I use 32bit squid and it's currently using 3.8GB of RAM. 32bit squid
has a 4G limit
mike
At 06:54 PM 2/13/2008, J. Peng wrote:
I found that 32-bit squid can use at most 1.8G of memory.
does a 64-bit squid support much larger memory than the limit above?
where to get a 64-bit squid source? thanks!
We're using IPVS/LVS in our configuration. We have 13 squid
instances running, 10 running Debian w/ 32bit squid and 3 running on
Solaris 9 on Sun Netra X1s. The Netras are for low-traffic stuff.
The 2 load balancers are Dell 1U boxes w/ quad Intel NICs, running
Debian and packages from
What do your queries look like? If the URL contains cgi-bin or a ?,
squid won't cache it by default
mike
At 01:15 PM 12/18/2007, Martin Jacobson \(Jake\) wrote:
I don't understand why I am having so much trouble getting something
that seems to be so simple working. I have downloaded and
That 2GB cache_mem crash seems to indicate you're using a 32bit
version of squid. Squid needs memory to index every object in
your cache; so if you allocate 2GB for cache_mem, only what's left of the
32bit address space is available for the index.
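Rough arithmetic behind that, as a sketch: the ~3GB usable address space and the ~100 bytes of index per object are ballpark assumptions, not measured values from this thread.

```shell
# All figures are ballpark assumptions for a 32-bit squid process.
addr_space_mb=3072        # roughly the usable address space of a 32-bit process on Linux
cache_mem_mb=2048         # the cache_mem from the crash report
index_bytes_per_obj=100   # assumed in-memory index cost per cached object
left_mb=$((addr_space_mb - cache_mem_mb))
max_objects=$((left_mb * 1024 * 1024 / index_bytes_per_obj))
echo "$max_objects objects before the index alone fills the remaining space"
```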
mike
Try increasing the ICP/HTCP response time. The auto timeout
detection doesn't work too well for me, even though my peers are on the same switch.
mike
At 06:01 PM 10/11/2007, Tony Dodd wrote:
Hey All,
Been working on rolling out HTCP cache_peer relationships within my
squid cluster, but I'm running
Addendum to my previous mail regarding COSS store rebuild speed; for
people that are using the COSS store, how big is your cache_dir? Are
you using multiple files or 1 large file? What's your disk configuration?
thanks,
mike
Does squid read the entire COSS file when it starts up and does the
store rebuild? I'm planning to have a 480GB store, using 48 10GB COSS files.
I have 6 disks, so each disk will have 8 COSS files, each @ 10GB.
During my initial test, it takes ~120 seconds to read through a 10GB
coss
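Back-of-the-envelope rebuild time for the layout above (the 120s/file figure is from my test; perfect per-disk parallelism during rebuild is an assumption):

```shell
files=48           # total COSS files planned
secs_per_file=120  # measured: ~120s to read one 10GB COSS file
disks=6
serial_min=$((files * secs_per_file / 60))            # reading one file at a time
parallel_min=$((files / disks * secs_per_file / 60))  # 8 files per disk, disks in parallel
echo "serial rebuild: ${serial_min}m, parallel across disks: ${parallel_min}m"
```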
Hi,
The list has been very quiet w/ regards to the coss storage.
1. is it still being developed/maintained?
2. is it production ready?
mike
I remember reading somewhere that COSS will be removed from the 3.0
release; just want to make sure it won't get abandoned, since our
cache data is really important to us.
mike
At 03:58 PM 10/3/2007, Adrian Chadd wrote:
On Wed, Oct 03, 2007, leongmzlist wrote:
Hi,
The list has been very quiet
You can create a fifo pipe and have squid write its access logs to that
pipe. On the other end of the pipe, write a custom log processor to
look for whatever you want and act on it.
Note: squid takes logging VERY SERIOUSLY. If your log processor
crashes or blocks, that'll impact/crash squid.
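A minimal sketch of that setup, under stated assumptions: the fifo path and the TCP_DENIED filter are made-up examples, and squid.conf would point access_log at the fifo. The reader loop has to stay alive and fast, or squid's logging stalls:

```shell
#!/bin/sh
# Hypothetical fifo path; squid.conf would contain:
#   access_log /tmp/squid-access.pipe
PIPE=${PIPE:-/tmp/squid-access.pipe}
[ -p "$PIPE" ] || mkfifo "$PIPE"

# Custom processor: react to interesting entries as they arrive.
# Keep this loop simple -- if it blocks or dies, squid suffers.
process_logs() {
  while IFS= read -r line; do
    case "$line" in
      *TCP_DENIED*) echo "denied: $line" ;;
    esac
  done < "$PIPE"
}
```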
mike
At
Follow up to my previous email:
http://wiki.squid-cache.org/SquidFaq/SquidLogs
mike
At 10:34 PM 9/30/2007, Guangwei Yuan wrote:
Hi,
We setup squid in the accelerator mode. To prevent potential cache
poisoning, we validate the request url in the backend, and return http
status 400 if the url
Also forgot to mention:
# hierarchy_stoplist cgi-bin ?
mike
At 05:08 PM 9/3/2007, Gert Verhoog wrote:
Adrian Chadd wrote:
On Tue, Sep 04, 2007, Gert Verhoog wrote:
I'm beginning to suspect that refresh_pattern ignores query strings, but
hopefully I'm wrong. Currently I'm not caching urls
Depends on what kind of authentication. If it's basic auth, squid
will cache the object w/o the auth info. So B would get A's object
if the request is the same. Note: B will get the object
regardless of whether the authentication is correct, since squid cached the
object w/o the auth info
At 08:47 PM 8/28/2007, Deephay wrote:
On 8/29/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
On tis, 2007-08-28 at 22:09 +0800, Deephay wrote:
Greetings all,
I want to have a large acl list for my squid transparent proxy
(10,000 entries) for url filtering. My question is: will the
I want certain requests to go directly to the origin server, without
bothering to check the siblings for a cached copy. I'm trying to
reduce the amount of ICP packets, since I know certain requests are
never cached.
here's my settings:
cache_peer 1.2.3.3 parent 80 0
Seems pretty obvious. You need to reduce the size of your
cache_dir. You can find more info on the appropriate cache_dir size in
the FAQ, I think
mike
At 11:42 AM 7/27/2007, Parveen Parashar wrote:
-- Forwarded message --
From: Parveen Parashar [EMAIL PROTECTED]
Date: Sat, 28
forward: office users -> squid -> the internet
reverse: world -> squid -> your webservers.
mike
At 06:30 PM 7/25/2007, Ming-Ching Tiew wrote:
Believe it or not, I got problem understanding the basics.
What's the difference between forward and reverse proxy.
When I read the article,
Many scanners look for open proxies on ports 3128 and 8080.
mike
At 12:52 PM 6/29/2007, Tek Bahadur Limbu wrote:
Reid wrote:
Thank you everyone for your help. The dedicated server company
where my squid is located has just
reported to me that we are blocking outgoing connections on
tcp/3128
Block audio/video mimetypes. It won't block everything, but should
work w/ most sites.
acl audiofiles req_mime_type -i ^audio/.*
acl videofiles req_mime_type -i ^video/.*
http_access deny audiofiles
http_access deny videofiles
see squid.conf for examples
mike
At 09:52 PM 6/26/2007, revathi
in http_access rules. It only has
# effect in rules that affect the reply data stream such as
# http_reply_access.
mike
At 11:14 PM 6/26/2007, leongmzlist wrote:
Block audio/video mimetypes. It won't block everything, but should
work w/ most sites.
acl audiofiles req_mime_type -i
By default, squid doesn't cache any URL that contains a ?
or cgi-bin
put the following
acl youtube dstdomain .youtube.com
cache allow youtube
BEFORE the following (they are already defined in squid.conf)
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
Here's a sample of the response headers:
wget -S http://user:[EMAIL PROTECTED]/path/file
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Server: Apache-Coyote/1.1
Pragma: No-cache
Cache-Control: no-cache
Expires: Wed, 31 Dec 1969 16:00:00 PST
Content-Type:
Look at http://workaround.org/squid/cricket/Defaults for some OIDs.
mike
At 12:24 PM 6/12/2007, Leonardo Rodrigues Magalhães wrote:
Hello Guys,
I'm trying to develop some Cacti templates for graphing some squid
counters. I have already done a graph showing HTTP Requests and
HTTP
It seems like your squid is not running on the standard port. Use
the -p flag to specify the port. Make sure your squid.conf has the following:
acl PURGE method PURGE
http_access allow PURGE localhost
mike
At 08:26 PM 6/12/2007, nonama wrote:
Dear All,
I want to clear only one site in the
-0700 skrev leongmzlist:
Hi
I'm trying to reverse proxy URLs w/ basic auth
(http://user:[EMAIL PROTECTED]/path/file ).
From a browser, or by using a redirector?
In nearly all cases the browser will just send the URL-path to the
originserver (the reverse proxy).
For the above to work Squid needs
Hi
I'm trying to reverse proxy URLs w/ basic auth
(http://user:[EMAIL PROTECTED]/path/file ). How will squid
handle that? If squid receives a request for the same file, but w/
different credentials, will it hit the cached version or will it hit
the parents?
mike
I had this problem as well. Solved by increasing icp timeout.
icp_query_timeout 7000
works for me
mike
At 02:55 PM 5/18/2007, Pedro de Medeiros wrote:
Hi, squid users.
I am trying to make some proxies talk to each other, but something is
wrong with my squid.conf files and I don't know what
You can set up an IPVS load balancer in front of your squid pool. I
use it to load balance my 10 squid servers. See
http://www.linuxvirtualserver.org/
mike
At 07:10 AM 5/11/2007, Adrian Chadd wrote:
On Fri, May 11, 2007, Sean Walberg wrote:
On 5/9/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
1. your cache sibling configuration is wrong; you're not contacting
any of the siblings.
2. use the proxy-only flag if you don't want squid to store cache
hits from the siblings.
try something like:
cache_peer sibling1 sibling 80 3130 proxy-only no-delay allow-miss name=sibling1
mike
At
Actually, we won't have duplicate objects since the machines are peered.
mike
At 12:48 AM 5/8/2007, Andrew Miehs wrote:
Hello leongmzlist,
On 08/05/2007, at 2:17 AM, leongmzlist wrote:
We got a server w/ 8GB of RAM for caching lots of small objects in
reverse proxy mode.
I calculated
(((cache_dir size in KB / avg object size in KB) / 256) / 256) * 2 = number
of L1 dirs. (L2 will always be 256)
You can punch that into google ;)
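Plugging example numbers into the formula above (the 100GB cache_dir and 13KB average object size are made-up assumptions, not values from this thread):

```shell
cache_dir_kb=$((100 * 1024 * 1024))  # assumed 100GB cache_dir, in KB
avg_obj_kb=13                        # assumed average object size, in KB
objects=$((cache_dir_kb / avg_obj_kb))
l1_dirs=$((objects / 256 / 256 * 2)) # L2 will always be 256
echo "$l1_dirs L1 directories"
```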
mike
At 12:00 PM 4/13/2007, Brian Elliott Finley wrote:
Thanks, Adrian,
I'm trying the settings changes you have recommended. They now look like:
cache_mem 200 MB
I have a pretty large squid installation: 6 squid boxes, each w/ 6GB
of RAM, as a reverse proxy pool. From what I've learned, it's better
to have many small boxes w/ moderate disk space than one huge system w/
lots of space.
mike
At 08:01 PM 4/4/2007, Zak Thompson wrote:
The Iops off the san
Yeah, that's the case in my situation. W/ 4GB on 64bit, squid starts
erroring when it hits the 3.5G mark. Upgrading to 6G makes the problem
go away. I guess the solution for now is to throw more servers into
the pool. We have a lot of Sun Netra X1s (400MHz processor, 1G of
RAM, 2 HDD). Is
If that's the case, that would explain it. I did a test by
downloading from a parent cache, at about ~150 req/sec
mike
At 02:52 PM 3/15/2007, Chris Robertson wrote:
leongmzlist wrote:
Hi,
I have the following on my config:
cache_dir coss /data/logs/squid_data/squid_coss.bin 9216
block
Don't have any real stats, since I was only doing a test.
mike
At 11:06 PM 3/14/2007, Tek Bahadur Limbu wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Wed, 14 Mar 2007 15:12:56 -0700
leongmzlist [EMAIL PROTECTED] wrote:
Hi,
I have the following on my config:
cache_dir coss /data
Hi,
Looking into solutions for reducing squid's memory use. I have 32bit squid
and 64bit squid running, but I'm quickly running out of RAM again.
32bit squid on linux 64bit kernel, aufs: ~115 bytes / obj
64bit squid; aufs, linux: ~165 bytes / obj
So, a 32bit squid w/ 4G of RAM can actually store
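Using those per-object figures, the index capacity per GB of RAM works out as follows (pure arithmetic on the numbers above; the 1GB figure is just an example unit):

```shell
bytes_per_obj_32=115   # 32bit squid on 64bit kernel, aufs
bytes_per_obj_64=165   # 64bit squid, aufs
gb=$((1024 * 1024 * 1024))
objs32=$((gb / bytes_per_obj_32))
objs64=$((gb / bytes_per_obj_64))
echo "per GB of RAM: $objs32 objects (32bit) vs $objs64 objects (64bit)"
```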
Hi,
I have the following on my config:
cache_dir coss /data/logs/squid_data/squid_coss.bin 9216
block-size=8192 max-size=32768 membufs=128
cache_dir aufs /data/logs/squid_data/squid-large-files 20480 20 256
cache_swap_log /data/logs/squid_data/%s
All the files smaller than 32K are supposed
Check how many objects are in your cache (either via squid SNMP, or
/bin/find). Check my previous posts regarding out-of-memory
errors. Basically, more objects = more RAM use.
mike
At 10:55 AM 3/8/2007, Dave Rhodes wrote:
Colin,
Thanks for your reply. I checked into hugemem and it looks