Re: [squid-users] cpu load boom when rotating the access.log (COSS filesystem)

2008-04-01 Thread Felix New
Amos:
  Thank you very much. I would appreciate it if you could give me some more details.


2008/3/28, Amos Jeffries [EMAIL PROTECTED]:

 Ah, a few problems with COSS. Firstly, it does not handle large objects
 very well.
 Secondly, its reload requires reading the entire cache_dir into memory
 slice by slice, which gets extremely slow the larger the dir is.

 You would get better performance by splitting your cache into two
 cache_dirs: one COSS (max around 2 GB) for small objects and one ufs/aufs
 for large objects.


Each of my cache_dir disks is larger than 100G, and the cache box
serves very small files -- this is the reason I chose COSS.
Following your advice, I would need to split the cache into about 50 (or
more) COSS cache_dirs, plus several aufs dirs for large objects (if any
exist) -- is that right?

Why does splitting a big cache into several cache_dirs give better
performance?
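A minimal sketch of the split being suggested (the paths and sizes below
are illustrative placeholders, not tested values). The point is that each
COSS stripe stays small enough to rebuild quickly, and anything above a
COSS dir's max-size can only be stored in the aufs dir:

# hypothetical split -- adjust paths and sizes to the actual disks
cache_dir coss /cache1/coss 2000 max-size=100000 block-size=512
cache_dir coss /cache2/coss 2000 max-size=100000 block-size=512
# objects larger than 100000 bytes can only land in the aufs dir
cache_dir aufs /cache3/aufs 50000 16 256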


-- 
Best regards
Felix New


Re: [squid-users] cpu load boom when rotating the access.log (COSS filesystem)

2008-03-27 Thread Felix New
Thanks for your reply.

1. The version I use is 2.6.STABLE19:
$ squid/sbin/squid  -v
Squid Cache: Version 2.6.STABLE19

2. The OS is Red Hat Enterprise Linux 4 Update 4, the file system under
the cache dir is ext3, and the cache_dir is COSS.
The cache_dir line in squid.conf:
cache_dir coss /cache/coss 8000 max-size=100 block-size=512 max-stripe-waste=32768 membufs=30

By the way, I want to know whether the time at which Squid rebuilds its
cache_dirs is random. It rebuilds the cache_dirs at startup, but I have
also seen it rebuild them at random times while running, without my
having restarted the process.

2008/3/26, Amos Jeffries [EMAIL PROTECTED]:
 Felix New wrote:
  hi all,
 
 I have used the aufs file system for a few days and it works very well,
  but I ran into a problem when I changed from aufs to COSS: the CPU load
  goes very high (nearly 100%) when I rotate the Squid access log with
  'squid -k rotate', and it never comes back down.
 
  I googled this and found an article about it:
  http://www.freeproxies.org/blog/2007/12/29/advanced-squid-issues-upkeep-scripts-and-disk-management/
---
  If you have a script rotate your squid logs (as you should), and the
  squid cache is rebuilding when you rotate your logs, squid will not
  accept any more connections until it has finished rebuilding the
  storage.

Is this a Squid bug? How can it be fixed?
 
 thank you.
 
 

 FAQ #1: Which squid release are you using?

 FAQ #2: What exact configuration are you using (minus default comments)?

 Also: What system setup do you have underneath squid? disks and
 cache_dirs, etc.

 Amos
 --
 Please use Squid 2.6STABLE17+ or 3.0STABLE1+
 There are serious security advisories out on all earlier releases.



-- 
Best regards
Felix New


[squid-users] cpu load boom when rotating the access.log (COSS filesystem)

2008-03-26 Thread Felix New
hi all,

   I have used the aufs file system for a few days and it works very well,
but I ran into a problem when I changed from aufs to COSS: the CPU load
goes very high (nearly 100%) when I rotate the Squid access log with
'squid -k rotate', and it never comes back down.

I googled this and found an article about it:
http://www.freeproxies.org/blog/2007/12/29/advanced-squid-issues-upkeep-scripts-and-disk-management/
  ---
If you have a script rotate your squid logs (as you should), and the
squid cache is rebuilding when you rotate your logs, squid will not
accept any more connections until it has finished rebuilding the
storage.
  
  Is this a Squid bug? How can it be fixed?

   Thank you.
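One common workaround is to have the rotation script check whether a
cache rebuild is still in progress before calling squid -k rotate. A
rough sketch, assuming the default cache.log location and the Squid 2.6
rebuild messages ('Store rebuilding ...' progress lines and a final
'Finished rebuilding storage from disk' line) -- verify both on your own
box:

#!/bin/sh
# rotate only if the most recent rebuild message says the rebuild finished
LOG=/usr/local/squid/var/logs/cache.log
LAST=$(grep -E 'Store rebuilding|Finished rebuilding' "$LOG" | tail -1)
case "$LAST" in
  # no rebuild messages at all, or rebuild finished: safe to rotate
  ''|*'Finished rebuilding'*) /usr/local/squid/sbin/squid -k rotate ;;
  *) echo "cache rebuild appears to be in progress; skipping rotate" >&2 ;;
esac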


-- 
Best regards
Felix New


[squid-users] about the bytes-sent item in access.log

2007-04-25 Thread Felix New

Hi, all,

I'm writing to ask about the bytes-sent figure (format code: st) that
is logged in access.log.

We found that the figure logged in access.log is the bytes handed to
the socket send buffer, not the bytes actually delivered to the client.
Is my understanding correct? If it is right, how can I get the bytes
sent to the CLIENT rather than to the buffer?

There are some lines in my /etc/sysctl.conf (OS: RHEL 4.3); could they be related?

net.core.wmem_max = 900
net.core.wmem_default = 900
net.ipv4.tcp_wmem = 92160 500 13107200

thanks.

--
Best regards
Felix New


Re: [squid-users] Question about gzip Content-Encoding of IIS 6.0

2007-04-11 Thread Felix New

I have installed the Squid HEAD version 2.HEAD-20070411 and the http11 patch.

But the problem is: the request uses HTTP/1.1, while the response uses HTTP/1.0.


I use the following directives (the others are at their defaults):
http_port 80 transparent
cache_vary on
server_http11 on
detect_broken_pconn on
request_entities on

By the way: if I give http_port the http11 option (http_port 80
transparent http11), I get an error when restarting Squid: FATAL:
Bungled squid.conf line 97: http_port 80 transparent http11.

The following are the request and response HTTP headers:
(Request-Line)  GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Accept-Language: zh-cn
Connection: Keep-Alive
Cookie: WT_FPC=id=26191a2d324ebd7ca751176346470578:lv=1176346470578:ss=1176346470578
Host: www.mydomain.com
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)



(Status-Line)   HTTP/1.0 200 OK
Accept-Ranges: bytes
Cache-Control: max-age=60
Content-Length: 219502
Content-Location: http://www.mydomain.com/index.htm
Content-Type: text/html
Date: Thu, 12 Apr 2007 03:45:49 GMT
ETag: 6269eafeb47cc71:688
Last-Modified: Thu, 12 Apr 2007 03:45:21 GMT
Server: Microsoft-IIS/6.0
Via: 1.0 test:80 (squid/2.HEAD-20070411)
X-Cache: MISS from test
X-Powered-By: ASP.NET


2007/4/6, Henrik Nordstrom [EMAIL PROTECTED]:

On Fri, 2007-04-06 at 12:49 +0800, Felix New wrote:

IIS requires HTTP/1.1 to do dynamic gzip encoding. That's what started
the http11 branch (a customer using Squid as a reverse proxy in front of
IIS).

Regards
Henrik





--
Best regards
Felix New


Re: [squid-users] please help for delay pools

2007-04-09 Thread Felix New

Hi,
Maybe you need the delay_pools directive; you can read its
documentation in squid.conf.

BTW: did you build Squid with the --enable-delay-pools option? If not,
rebuild it.
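As a rough illustration of both pieces (the subnet and rates here are
made-up placeholders, not values from this thread): a maxconn ACL caps
simultaneous connections per client, and a class-2 delay pool caps
per-client bandwidth. Note the http_access deny line must appear before
whatever rule otherwise allows these clients:

# deny a client's 4th simultaneous connection
acl lan src 192.168.1.0/24
acl toomany maxconn 3
http_access deny lan toomany

# class 2 pool: no aggregate limit (-1/-1), about 32 KB/s per client IP
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 32000/32000
delay_access 1 allow lan
delay_access 1 deny all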

2007/4/9, squid learner [EMAIL PROTECTED]:

I am not very experienced with Squid, but my cache is running
fine with help from the experts on this mailing list.
I have 50 clients and two 512k DSL lines, used
round-robin in Squid.

The problem is that when a client opens a lot of downloads, the
speed drops for everyone. I want a configuration so that Squid
does not allow a client more than 3 downloads, and also keeps
those downloads at half the normal speed,

so the other clients get normal speed.
Please give me something I can put in my squid.conf.

Thank you




--
Best regards
Felix New


[squid-users] many acl and refresh_pattern lines reduce performance

2007-04-09 Thread Felix New

hi,

   Our Squid node has a lot of acl and refresh_pattern lines -- several
hundred of them -- and I think our low performance is related to this.

   Many people on this mailing list have said that SquidGuard can manage
the ACLs and give better performance, but how do you manage so many
refresh_patterns?

   There are many domains with different requirements in my squid.conf.
Is there a tool that solves this, or could I write an additional program
to do it?

   Thank you.
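For the ACL half at least, it may help that most ACL types can read
their values from an external file, one entry per line, which keeps
squid.conf itself short; as far as I know refresh_pattern has no
equivalent file form, so those lines would still need to be generated.
A sketch with a hypothetical path:

# domains listed one per line in the quoted file
acl cache_domains dstdomain "/etc/squid/cache_domains.txt"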
--
Best regards
Felix New


[squid-users] Question about gzip Content-Encoding of IIS 6.0

2007-04-05 Thread Felix New

Hi, all,

Our Squid boxes have run very well with gzip Content-Encoding (the
origin server is Apache 2.2.3) for the last few days, but trouble came
when we added an IIS 6.0 server: the gzip encoding disappears...

   When I visit pages on the IIS 6.0 server directly, I see the data is
gzip-encoded, but when Squid fetches data from IIS 6.0, it sends the
data to the client uncompressed (either Squid decodes it, or it receives
it unencoded), though the page displays without error.

   Everything is fine when the origin server is Apache 2.2.3: the data
is gzip-encoded by Apache, and Squid and the client get the compressed
data.

  What is wrong? Does Squid not support IIS, or is additional
configuration needed?

  Thanks.

The headers from Squid with the IIS 6.0 parent (note that there are no
Content-Encoding: gzip or Vary: Accept-Encoding lines):

(Status-Line)   HTTP/1.0 200 OK
Accept-Ranges: bytes
Cache-Control: max-age=60
Connection: keep-alive
Content-Length: 221686
Content-Location: http://www.mydomain.com/index.htm
Content-Type: text/html
Date: Fri, 06 Apr 2007 04:19:56 GMT
ETag: 12d8eeba278c71:686
Last-Modified: Fri, 06 Apr 2007 04:19:12 GMT
Server: Microsoft-IIS/6.0
X-Cache: MISS from WT-TJTG-25-7.mydomain.com
X-Powered-By: ASP.NET

These are the correct headers, from the Apache origin:
(Status-Line)   HTTP/1.0 200 OK
Accept-Ranges: bytes
Age: 146
Cache-Control: max-age=300
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 22047
Content-Type: text/html
Date: Fri, 06 Apr 2007 03:44:28 GMT
Expires: Fri, 06 Apr 2007 03:49:28 GMT
Server: Apache/2.2.3 (Unix)
Vary: Accept-Encoding
X-Cache: HIT from www.mydomain.com
X-Cache: HIT from CT-SHZD-237-49.mydomain.com

--
Best regards
Felix New


[squid-users] heavy load server's performance

2007-03-21 Thread Felix New

Hi, all:

   Our Squid box's performance is not very good (4 reqs/min), the CPU
load is very high, and the traffic cannot grow past 100 Mbit/s (on a
Gbit/s network).

   We use the aufs scheme now. We also tested COSS, and the result was
not very good either (the CPU was overloaded as well).

   Can we optimize further? Is this the best performance our hardware
can achieve? Where is the bottleneck?

   What is the best performance of your boxes?

Here is some information:

++ hardware:
   cpu: Xeon 1.8G (Hyper-Threading)
   os: RHAS 4.3
   file descriptor limit: 32768
   mem: 2G
   disk: 17G (SCSI) for the OS,
   73G (SCSI, 1rpm) for cache data

++ squid.conf
   about 100 ACLs
   cache_mem 1500 MB
   cache_dir coss /dev/sdb1 5 max-size=1000 block-size=8192
   (some servers use aufs: cache_dir aufs /cache/cache 5 240 256;
sdb1 is mounted with the async and noatime options)

++ top:
top - 11:37:46 up 42 days, 12:00,  1 user,  load average: 1.00, 0.99, 0.91
Tasks:  54 total,   3 running,  51 sleeping,   0 stopped,   0 zombie
Cpu(s): 41.8% us,  8.3% sy,  0.0% ni, 47.9% id,  1.3% wa,  0.7% hi,  0.0% si
Mem:   2074980k total,  1979160k used,95820k free,   686568k buffers
Swap:  4192956k total, 3404k used,  4189552k free,   355632k cached

 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
3738 squid 25   0  918m 714m 2104 R 99.9 35.3  63:59.56 squid
3901 root  16   0  3664  912  740 R  0.3  0.0   0:00.04 top
   1 root  16   0  3312  236  208 S  0.0  0.0   0:04.01 init

++   mgr:info

Squid Object Cache: Version 2.6.STABLE12
Start Time: Thu, 22 Mar 2007 02:34:09 GMT
Current Time:   Thu, 22 Mar 2007 03:21:32 GMT
Connection information for squid:
   Number of clients accessing cache:  9967
   Number of HTTP requests received:   1705959
   Number of ICP messages received:0
   Number of ICP messages sent:0
   Number of queued ICP replies:   0
   Request failure ratio:   0.00
   Average HTTP requests per minute since start:   36012.7
   Average ICP messages per minute since start:0.0
   Select loop called: 1805771 times, 1.574 ms avg
Cache information for squid:
   Request Hit Ratios: 5min: 97.9%, 60min: 97.6%
   Byte Hit Ratios:5min: 83.9%, 60min: 78.7%
   Request Memory Hit Ratios:  5min: 9.6%, 60min: 18.2%
   Request Disk Hit Ratios:5min: 16.3%, 60min: 7.5%
   Storage Swap size:  240512 KB
   Storage Mem size:   241140 KB
   Mean Object Size:   26.95 KB
   Requests given to unlinkd:  0
Median Service Times (seconds)   5 min   60 min:
   HTTP Requests (All):   0.00919  0.00562
   Cache Misses:  0.09736  0.10281
   Cache Hits:0.00919  0.00562
   Near Hits: 0.06640  0.05331
   Not-Modified Replies:  0.00865  0.00463
   DNS Lookups:   0.03868  0.06364
   ICP Queries:   0.0  0.0
Resource usage for squid:
   UP Time:2842.262 seconds
   CPU Time:   2911.327 seconds
   CPU Usage:  102.43%
   CPU Usage, 5 minute avg:87.11%
   CPU Usage, 60 minute avg:   102.57%
   Process Data Segment Size via sbrk(): 378764 KB
   Maximum Resident Size: 0 KB
   Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
   Total space in arena:  378764 KB
   Ordinary blocks:   297942 KB  10127 blks
   Small blocks:   0 KB  0 blks
   Holding blocks:401944 KB  6 blks
   Free Small blocks:  0 KB
   Free Ordinary blocks:   80821 KB
   Total in use:  699886 KB 90%
   Total free: 80821 KB 10%
   Total size:780708 KB
Memory accounted for:
   Total accounted:   552999 KB
   memPoolAlloc calls: 147261049
   memPoolFree calls: 147041272
File descriptor usage for squid:
   Maximum number of file descriptors:   32768
   Largest file desc currently in use:   6319
   Number of file desc currently in use: 5784
   Files queued for open:   0
   Available number of file descriptors: 26984
   Reserved number of file descriptors:   100
   Store Disk files open:   0
   IO loop method: epoll
Internal Data Structures:
 8976 StoreEntries
 4571 StoreEntries with MemObjects
 4537 Hot Object Cache Items
 8924 on-disk objects
--
Best regards
Felix New


Re: [squid-users] heavy load server's performance

2007-03-21 Thread Felix New

2007/3/22, Adrian Chadd [EMAIL PROTECTED]:


Do some profiling to find out whether you're lacking CPU or I/O grunt.
I started optimising the Squid-2 codebase to remove a lot of the CPU
hogs, but it's a big job and no one seemed willing to sponsor the work.

Try out the Squid-2-HEAD code tree and see if you notice any difference.
There's plenty of work that could be done on Squid-2 to really cut
back on the CPU overheads, but *shrug* no one showed enough interest.


Adrian




Adrian:
Thank you very much.

Maybe the bottleneck is not the I/O subsystem: I ran a test with a null
cache_dir (no disk cache at all), but the CPU was still overloaded...

I will test the Squid-2-HEAD code tree later.


--
Best regards
Felix New


Re: [squid-users] About a squid manager system

2007-03-18 Thread Felix New

2007/3/18, SM [EMAIL PROTECTED]:


You'll have to write the code to parse the squid.conf and feed it
into your database.  You can then change the settings through the
interface you designed.  Once the changes are done, generate a new
squid.conf by using the settings stored in your database and send it
to the computer where squid is running.  Then restart squid.

The above can be done in Perl or any language you are familiar with.

Regards,
-sm




Thank you for your reply! What you described is just what I want to do.

Now I am hoping someone can give me some ideas on how to parse
squid.conf appropriately, and how to generate a correct new squid.conf
from the information stored in my database. There are more than 200
statements in squid.conf, and some statements are sequence-dependent...

Do I need a syntax analyzer?
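One way to sidestep writing a full parser (a sketch only; the fragment
layout and paths are hypothetical): have the database export each
logical block of squid.conf as a numbered fragment, concatenate the
fragments in order so that sequence-dependent directives keep their
sequence, and let Squid itself validate the result before installing it:

#!/bin/sh
# reassemble squid.conf from ordered fragments, then validate and reload
FRAGDIR=/etc/squid/conf.d      # fragments named like 10-acls.conf, 50-cache.conf
OUT=/etc/squid/squid.conf
cat "$FRAGDIR"/[0-9][0-9]-*.conf > "$OUT.new"
if squid -k parse -f "$OUT.new"; then
    mv "$OUT.new" "$OUT"
    squid -k reconfigure
else
    echo "new config failed to parse; keeping the old squid.conf" >&2
fi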

--
Best regards
Felix New


Re: [squid-users] About a squid manager system

2007-03-13 Thread Felix New

2007/3/20, Martin A. Brooks [EMAIL PROTECTED]:

It sounds to me like you could benefit from using revision control
software to manage the configuration files.  The tool that springs to
mind is Subversion (see http://subversion.tigris.org ).  Subversion has
a mechanism called hooks that allow you to perform arbitrary actions
when a file is changed.

Regards

--

 Martin A. Brooks | http://www.antibodymx.net/ | Anti-spam & anti-virus
   Consultant| e: [EMAIL PROTECTED]   | filtering. Inoculate
 antibodymx.net  | m: +447896578023   | your mail system.




Thank you for your reply.

Maybe I did not describe my requirement well. We need a web front end
(or some other kind of client) to modify all the configuration of all
the servers easily (perhaps splitting the file and storing it in a
database, or as XML), because logging in, editing with vi, and logging
out of many Squid boxes one by one is very tiring. I want to split
squid.conf and store it in a database, so it can be modified easily
through a web interface; but I don't know an appropriate way to parse,
split, and reassemble squid.conf given its directive syntax.

--
Best regards
Felix New


[squid-users] About a squid manager system

2007-03-12 Thread Felix New

Hi all,

   Our company has a lot (100) of Squid boxes, and several
engineers log in to the boxes and edit squid.conf whenever they add a
domain ACL or modify something else.

   The problem is that we modify the configuration file frequently, so
this work is very tiresome.

  So we want to develop a central Squid manager system with a web
interface for managing the Squid configuration (squid.conf).

  Now I have hit a difficulty: how to split squid.conf for storage (in
a database), and how to reassemble the configuration file. Some
directives are sequence-dependent...

   Could you give me a hint, or point me at some GNU tools for parsing
and generating configuration files?

   Any hint is appreciated...

--
Best regards
Felix New


[squid-users] puzzled about url_regex

2007-02-06 Thread Felix New

All:

 I know a little about Perl regular expressions, but I'm puzzled about
url_regex in squid.conf: when I use Perl regular expression syntax, is
anything different?

 Does url_regex support ! (NOT), AND, and () (grouping)?


--
Best regards
Felix New


Re: [squid-users] puzzled about url_regex

2007-02-06 Thread Felix New

Henrik,

Thank you for your reply.

What puzzles me is this: I want to write the regex on one line, for use
in the directive cache deny nocache, for easy maintenance (nocache is
my url_regex ACL name).

What my ACL should do, for some URLs:
---Start---
img.domain.com/a/*
img.domain.com/b/*
img.domain.com/x/logo.jpg
---End---
I want to cache only http://img.domain.com/x/logo.jpg, and leave all the
others uncached.

I have written these directives:
---Start---
acl nocache url_regex ^http://a\.com/.*$
acl nocache url_regex ^http://b\.com/.*$
***
acl nocache url_regex ^http://c\.com/.*$
cache deny nocache
---End---
So I want to add one line to the above, at the position tagged
---Start---. I wrote this:
acl nocache url_regex ^http://img\.domain\.com/.*$ !^http://img\.domain\.com/x/logo\.jpg
but then nothing is cached, including http://img.domain.com/x/logo.jpg,
which I do want cached.
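For what it's worth: Squid's url_regex patterns are POSIX regular
expressions, so a Perl-style negation inside the pattern will not work --
the trailing !^http://... above is almost certainly being read as part
of the regex itself. The usual approach is to put the exception in its
own ACL and rely on the first matching cache rule winning. A sketch
using the hostnames from the example (untested):

acl logo_ok url_regex ^http://img\.domain\.com/x/logo\.jpg$
acl img_nocache url_regex ^http://img\.domain\.com/
cache allow logo_ok
cache deny img_nocache

Because access rules are evaluated top-down, logo.jpg matches the allow
rule and never reaches the deny.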

--
Best regards
Felix New


[squid-users] want bandwidth information for each domain name

2007-01-21 Thread Felix New

Hi, all!

We are running a CDN, providing transparent proxy cache service nodes.
We use configuration of the form acl acl_name dstdomain domain.com,
and only allow caching of content for the deployed websites.

Now we are hoping to obtain bandwidth information for every domain
name. The related squid.conf configuration looks like:

acl cache dstdomain a.com
acl cache dstdomain b.com
acl cache dstdomain c.com

I want the bandwidth information of a.com, b.com and c.com respectively.

Previously we got this information by analyzing access.log, but that
has some problems (e.g. it is not accurate when the files are large).
Would you please tell me if there is a better way?
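One option that may fit (assuming Squid 2.6, where access_log accepts a
logformat name followed by ACLs; the paths and ACL names here are
placeholders): write an ACL-filtered log per domain and sum its
reply-size (%<st) column:

# one filtered log per deployed domain
acl dom_a dstdomain a.com
acl dom_b dstdomain b.com
logformat bw %ts.%03tu %>a %<st %ru
access_log /usr/local/squid/var/logs/a.com.log bw dom_a
access_log /usr/local/squid/var/logs/b.com.log bw dom_b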

Regards
Felix
--
Best regards
Felix New