WCCP v1 + Squid 2.5S9 + kernel 2.6.5 problem

2005-03-23 Thread Muthukumar
Dear Dev Team,

I have a problem configuring WCCP v1 with Squid 2.5.STABLE9 on kernel 2.6.5. Our
configuration and settings follow. The Squid machine and the router are
communicating via WCCP: I see the UDP port 2048 packets going back and forth
between the router and the Squid box. Please let me know if we have missed
anything.

            203.157.193.81  -- Router with WCCP v1, IOS 12.2
                  |
                  |
    --------------+----------------
    |             |               |
203.157.193.82  203.157.193.89  203.157.193.85
  (squid)        (client)       (My system)



Router ip: 203.157.193.81
cache system: 203.157.193.82
Squid version: 2.5.STABLE9

Linux kernel version: 2.6.5
First attempt: compiled the kernel with ip_gre enabled.
Second attempt: applied the ip_wccp patch from the squid-cache.org site
and recompiled the kernel with both ip_gre and ip_wccp enabled.

Then loaded the modules:

modprobe ip_gre
modprobe ip_wccp

/etc/sysctl.conf

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
kernel.sysrq = 0

Executed "sysctl -p"



My system for ssh login: 203.157.193.85


Squid.conf
-

wccp_version 4
wccp_router 203.157.193.81

http_port 3128

---

On the squid machine (203.157.193.82):



iptunnel add gre1 mode gre remote 203.157.193.81 local 203.157.193.82 dev eth0
ifconfig gre1 127.0.0.2 up

iptables -t nat -A PREROUTING -d ! 203.157.193.82 -i gre1 -p tcp --dport 80 -j DNAT --to 203.157.193.82:3128

When I telnet to visolve.com from the client system (203.157.193.86), I get the
following output in tcpdump, but no entries appear in access.log.
I have also tried REDIRECT instead of DNAT; that failed as well.
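One frequent cause of exactly this symptom (the WCCP UDP exchange works and GRE-encapsulated SYNs arrive, but nothing ever reaches Squid or access.log) is reverse-path filtering discarding the decapsulated packets. A sketch of the workaround, assuming the interface names used above (eth0, gre1); note this is an assumption about your setup, and that the `default` entry in sysctl.conf does not retroactively apply to interfaces that already existed:

```shell
# rp_filter must be off on every involved interface, not just "default";
# the gre1 tunnel in particular will drop decapsulated packets otherwise.
for i in all default eth0 gre1; do
    sysctl -w net.ipv4.conf.$i.rp_filter=0
done
```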



[EMAIL PROTECTED] root]# tcpdump -i any 'not ( host 203.157.193.82 and port 22 ) and not host 203.193.157.82 and not port syslog and not port domain and not snmp and not port 3632 and not icmp and not host 204.152.189.116'
tcpdump: WARNING: Promiscuous mode not supported on the "any" device
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes
17:58:03.734727 IP 203.157.193.82.2048 > 203.157.193.81.2048: UDP, length 52
17:58:03.736439 IP 203.157.193.81.2048 > 203.157.193.82.2048: UDP, length 64
17:58:11.026858 IP 203.157.193.81 > 203.157.193.82: gre-proto-0x883e
17:58:11.026858 < 883e 64:
0x:  4500 0030 f8e8 4000 7e06 8091 404a b1fe  [EMAIL PROTECTED]@J..
0x0010:  3fc2 5143 0493 0050 56b2 10f8    ?.QC...PV...
0x0020:  7002 4000 5344  0204 05b4 0101 0402  [EMAIL PROTECTED]
17:58:14.035493 IP 203.157.193.81 > 203.157.193.82: gre-proto-0x883e
17:58:14.035493 < 883e 64:
0x:  4500 0030 f8f5 4000 7e06 8084 404a b1fe  [EMAIL PROTECTED]@J..
0x0010:  3fc2 5143 0493 0050 56b2 10f8    ?.QC...PV...
0x0020:  7002 4000 5344  0204 05b4 0101 0402  [EMAIL PROTECTED]
17:58:14.283166 IP 203.157.193.82.2048 > 203.157.193.81.2048: UDP, length 52
17:58:14.285777 IP 203.157.193.81.2048 > 203.157.193.82.2048: UDP, length 64
17:58:20.045910 IP 203.157.193.81 > 203.157.193.82: gre-proto-0x883e
17:58:20.045910 < 883e 64:
0x:  4500 0030 f906 4000 7e06 8073 404a b1fe  [EMAIL PROTECTED]@J..
0x0010:  3fc2 5143 0493 0050 56b2 10f8    ?.QC...PV...
0x0020:  7002 4000 5344  0204 05b4 0101 0402  [EMAIL PROTECTED]
17:58:24.747629 IP 203.157.193.82.2048 > 203.157.193.81.2048: UDP, length 52
17:58:24.750637 IP 203.157.193.81.2048 > 203.157.193.82.2048: UDP, length 64
17:58:34.981967 IP 203.157.193.82.2048 > 203.157.193.81.2048: UDP, length 52
17:58:34.985319 IP 203.157.193.81.2048 > 203.157.193.82.2048: UDP, length 64

Let me know if you need more inputs.

Thank You.



Squid benchmarking problem on EM64T platform

2005-02-26 Thread Muthukumar
Hello All,

I have tried to benchmark Squid on EM64T (Extended Memory 64 bit technology).

System Details
==============
Kernel - 2.6.5-1.358smp x86_64 x86_64 x86_64 GNU/Linux
Processor - Intel(R) Pentium(R) 4 CPU 3.40GHz (x2 processors)
Memory - 256 MB

Processor Activity with vmstat
==============================
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs us sy id wa
 1  5 325468   2156   8100   3532 1724  381  1866   381 1710   232  0  1 28 71
 0  1 325468   4368   8248   5324  565  486  1077   498 1256   176  0  1  2 97
 0  1 325492   2540   8256   5400 1932  235  1932   253 1381   238  0  1 42 57

I suspect a CPU-related problem:

1) The proxy server works fine during the warm and inc phases, with CPU usage at
50% to 75%. During the top phase, CPU usage drops to 3.4% - 5%, and the
benchmark runs into trouble at only 500 clients. The system error log
/var/log/messages contains no messages related to the problem.
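For what it's worth, averaging the id/wa columns of the vmstat sample above points at I/O wait (and the nonzero si/so columns show swapping on the 256 MB box) rather than a shortage of raw CPU. A small sketch over the pasted rows; the column positions assume the standard vmstat layout, with us/sy/id/wa as the last four fields:

```shell
# average the CPU idle (id) and I/O-wait (wa) columns of the vmstat sample
printf '%s\n' \
  '1 5 325468 2156 8100 3532 1724 381 1866 381 1710 232 0 1 28 71' \
  '0 1 325468 4368 8248 5324 565 486 1077 498 1256 176 0 1 2 97' \
  '0 1 325492 2540 8256 5400 1932 235 1932 253 1381 238 0 1 42 57' |
awk '{ id += $(NF-1); wa += $NF } END { printf "avg id=%.0f%% wa=%.0f%%\n", id/NR, wa/NR }'
```

An average wa of ~75% would mean the benchmark stalls waiting on disk/swap, which matches the low apparent CPU usage during the top phase.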

Is there any way to detect processor issues that could cause problems for
benchmarking?

Please share your knowledge about benchmark problems caused by the CPU.

Thanks,
-Muthukumar. 



Caching Squid

2005-01-20 Thread Muthukumar
Dear All,

Squid is designed to perform web caching, filtering, authentication, etc. The
performance of the web-caching part of Squid varies depending on the other
functionality.

Some environments need only web caching. To serve them, what functionality
could be removed from Squid to make it a web-cache-only build? I assume a Squid
implementation focused solely on web caching would give better performance.

Let us know your feedback.

regards
-Muthu 



web-caching survey - business type

2004-12-29 Thread Muthukumar
Dear All,

Wishes for New Year 2005.

We are analyzing web-caching usage patterns by business type:

schools & colleges, small to medium businesses, enterprises, and ISPs. I have
tried to gather survey data from the following links:
http://workshop97.ircache.net/minutes.html
http://www.avantisworld.com/02_cddvd_cds_faqs.asp

Business Type                 Cacheable request %
Schools, Colleges             45% - 50%
Small to Medium business      35% - 45%
Enterprise                    25% - 35%
ISPs                          20% - 30%

Is there any survey report on this?

thanks
Muthu 



squid-3.0 with epoll() vs without epoll()

2004-12-20 Thread Muthukumar
Hello Development Team,

We are benchmarking Squid performance with and without epoll() on the following
hardware configuration:

model name : Pentium III (Coppermine)
cpu MHz    : 927.753
RAM size   : 512 MB
OS         : Fedora Core 2 (kernel 2.6.5)

Results with squid-3.0 (development) and squid-2.5.STABLE7 (stable) are as follows:

squid-3.0 Pre3 + epoll() + in-core memory + /dev/null fs ( with epoll() ) :
=
req.rate = 371 req / sec
rep.rate = 370 rep / sec

Problem : CPU usage to 100% - CPU bound

  squid-3.0 Pre3 + poll() + in-core memory + /dev/null fs ( without epoll() ) :
  ==
req.rate = 345 req / sec
rep.rate = 344 rep / sec

Problem : CPU usage to 100% - CPU bound

poll() vs epoll():
With poll(), squid uses 50% - 60% CPU for even 50 - 150 requests/sec, whereas
epoll() uses only around 10-15% CPU (figures from the top tool).

squid-2.5 STABLE7 + poll() + in-core memory + /dev/null fs ( Stable version ) :

req.rate=612 req / sec
rep.rate=611 rep / sec

Problem : CPU usage to 100% - CPU bound

Why do the squid-3.0 without-epoll() benchmark and the squid-2.5 STABLE7
results differ? The epoll() method is also driving CPU usage to 100% - is that
the expected behaviour of the epoll() I/O method? (I have applied the recent
patch for the epoll() 100%-usage problem.) No error messages appeared in
cache.log.

If you need more information, let us know.

Thanks for your help.

Regards
Visolve Dev. team.



capacity planning

2004-12-08 Thread Muthukumar
Hello All,

We are planning to build a capacity-planning tool for Squid.

Objectives of the tool:

1. Based on the number of users, suggest a hardware + squid configuration.
2. Based on a hardware configuration (HDD, RAM, CPU), report:
   1. the number of users that squid can service
   2. the hardware upgrades needed to reach a required user count

Using the Polygraph benchmark tool, we are measuring the req/sec rates that a
Squid cache server (2.5, 3.0) can satisfy, and we are preparing metrics for the
hardware combinations tested (CPU, HDD, RAM).

Some of you may already have benchmarked squid-2.5 or 3.0 on different hardware
setups. To build the metrics, we would appreciate details of the req/sec rates
achieved, along with the hardware configuration, squid configuration tuning,
and Linux tuning used.
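As a deliberately crude starting point for such metrics, the mapping from user count to offered load can be as simple as multiplying by a per-robot request rate; the PolyMix workload quoted later in these posts uses max_robot_load = 0.4/sec. A sketch (the user count of 500 is an arbitrary example):

```shell
# users -> peak req/sec, assuming the PolyMix per-robot rate of 0.4 req/sec
awk -v users=500 -v per_user=0.4 'BEGIN { printf "%.0f req/sec\n", users * per_user }'
```

Comparing that figure against the benchmarked req/sec ceiling of a given CPU/RAM/HDD combination gives a first-cut sizing answer.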

Based on the calculated metrics, we plan to build a GUI for capacity planning.

thanks
muthukumar. 



Re: squid benchmarking results - squid-3.0 + epoll()

2004-12-08 Thread Muthukumar
Hi Gonzalo Arana,

Thanks for the detailed reply.

> Looks like you are running into a CPU bottleneck.  Perhaps you may want
> to add --enable-cpu-profiling to configure, and check cpu usage data
> with cache manager, or compile and link with -pg and check results with
> gprof.

I have configured with --enable-cpu-profiling and am monitoring with the
cachemgr.cgi script.


> Also, verify whether CPU usage is in kernel-mode or user-mode with vmstat
> during the test (sar gives this information as well).

vmstat result when squid is running at peak load ( 180 req / sec ),

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs us sy id wa
 1  0  0 119024  34116  6125200 0   110 1103579 69 31  0  0
 1  0  0  99464  34144  6125600 0   148 13033 26 65 35  0  0

>
> Looks like you are running with (the same?) CPU bottleneck.
>
> epoll's advantage is that CPU usage does not grow with the number of
> idle TCP connections.  If the number of concurrent connections is large,
> and there are no idle connections, epoll should only give a small
> increase in throughput (no cpu is used for traversing the list of fds).
>
> Is, by any chance, throughput (in bps) slightly larger with epoll?

I did not get this. How do I measure throughput (in bps)?
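One rough way is to multiply the reply rate by the mean reply size. Using the reply rate reported in this thread (~182 rep/sec) and an assumed mean object size of 13KB (the exp(13KB) figure from the PolyMix content model quoted elsewhere in these posts), a back-of-envelope sketch:

```shell
# approximate throughput: 182 replies/sec * 13 KB mean body size * 8 bits/byte
awk 'BEGIN { rep = 182; kb = 13; printf "%.1f Mbit/s\n", rep * kb * 8 / 1024 }'
```

Polygraph itself also reports throughput in its phase statistics; this estimate is only for a quick sanity check.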

>
>> 008.75| Connection.cc:485: error: 1/1 (s110) Connection timed out
>> 008.75| error: raw write after connect failed
>> after these req / sec configuration setting.
>
> Try enabling SYN cookies, and running squid with ulimit -HSn 131072.

I have set sysctl -w net.ipv4.tcp_syncookies=1 and the ulimit -HSn there, but I
am still getting the same error:
Connection.cc:485: error: 1/1 (s110) Connection timed out

Can we get a higher req/sec satisfaction rate with 512 MB RAM and a 927.753 MHz
CPU? I am using the /dev/null file system for benchmarking.

Do you have benchmarking any results for squid?

regards
muthukumar



Re: squid benchmarking results - squid-3.0 + epoll()

2004-12-08 Thread Muthukumar
Hi David,

Thanks for your reply.

I have tested squid + epoll() using the null fs. It consumes 90% CPU at the
peak rate (180 req/sec); system CPU idleness is around 0.
Do you have any benchmarking results for epoll() with squid?

How do I integrate shared memory with the null fs? I have configured the null
fs on squid-3.0 as:
cache_dir null /dev/null
cache_mem 200MB

epoll() is servicing only 20 req/sec more than poll() (180 req/sec with epoll()
vs 160 req/sec with poll()), even though I have included the latest epoll()
patch.
I have tuned kernel parameters as follows:

# polyserver, polyclient, squid
echo 1 > /proc/sys/net/ipv4/tcp_timestamps
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
echo 1 > /proc/sys/net/ipv4/tcp_sack

echo 8388608 > /proc/sys/net/core/wmem_max
echo 8388608 > /proc/sys/net/core/rmem_max
echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 4194304" > /proc/sys/net/ipv4/tcp_wmem

# polyserver, polyclient
ulimit -HSn 8192
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.ip_local_port_range="32768 60001"

# squid server
ulimit -HSn 32768
sysctl -w net.ipv4.tcp_syncookies=1

thanks
muthukumar

> Hi,
>
> Based upon my experience.  aufs works best when you have CPU to spare. More 
> often than not it ends up eating up more CPU to 
> scheduling of the threads, than you gain.  Perhaps try ufs in comparison.  
> For a lot of our workloads it is actually faster than 
> aufs on decent disks.  Or even better use ufs or the null fs in combination 
> with /dev/shm (if you can spare the memory).
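The /dev/shm suggestion above can be sketched as follows; the 200 MB size and mount point are illustrative assumptions, not a recommendation, and should be sized to the memory you can actually spare:

```shell
# Back a ufs cache_dir with RAM via tmpfs instead of the null fs
# (assumption: 200 MB can be spared; directory name is hypothetical).
mkdir -p /dev/shm/squid-cache
mount -t tmpfs -o size=200m tmpfs /dev/shm/squid-cache
chown squid:squid /dev/shm/squid-cache

# then in squid.conf (cache size must stay below the tmpfs size):
#   cache_dir ufs /dev/shm/squid-cache 180 16 256
```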
>



squid benchmarking results - squid-3.0 + epoll()

2004-12-08 Thread Muthukumar
Hello Development Team,

We ran a benchmark on the following hardware setup and got these results:

model name  : Pentium III (Coppermine)
cpu MHz  : 927.753
RAM size : 512

I would like to have your review of this. Can we get more req/sec satisfaction
on this setup?

---

squid 3.0 without epoll():
Squid Cache: Version 3.0-PRE3
configure options: '--prefix=/usr/local/squid3pre' '--with-aufs-threads=32' 
'--with-descriptors=32768' '--with-pthreads' 
'--enable-storeio=null,ufs,aufs' '--enable-debug-cbdata'

cache_mem 200MB
cache_dir null /dev/null

top output:

  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6428 squid  25   0  336m 322m 3276 R 99.9 63.8  3:05.54  squid


Results:
req.rate:167.50
rep.rate:167.17



squid 3.0 with epoll():
Squid Cache: Version 3.0-PRE3
configure options: '--prefix=/home/muthu/squidepoll' '--enable-epoll' 
'--with-aufs-threads=32' '--with-descriptors=32768' 
'--with-pthreads' '--enable-storeio=null,ufs,aufs' '--disable-poll' 
'--disable-select' '--disable-kqueue' '--disable-optimizations' 
'--enable-debug-cbdata'

cache_mem 200MB
cache_dir null /dev/null

top output:

  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8358 squid  16   0  425m 407m 3428 R 81.2 90.6  1:46.13  squid

Results:
req.rate:182.35
rep.rate:180.20


I would like your analysis of this. I am getting these errors,
008.75| Connection.cc:485: error: 1/1 (s110) Connection timed out
008.75| error: raw write after connect failed
after the req/sec configuration settings above.

Regards,
Muthukumar.




squid-3.0 benchmarking

2004-12-06 Thread Muthukumar
Hello Developers,

We are benchmarking squid-3.0 with and without the epoll() I/O method.

With the Polygraph tool, the poly server is running with 750 aliases and
polyclt with 450 aliases (450 req/sec). Squid-3.0 without epoll is running with
8192 FDs. Polyclt reports the following problem:

006.76| Connection.cc:485: error: 32/32 (s110) Connection timed out
006.76| error: raw write after connect failed
006.76| connection to :3128 failed after 0 reads, 0 writes, 1 xacts

Normally the squid server and polyserver sockets end up in TIME_WAIT. I have
set "tcp_tw_recycle, tcp_rfc1337" to 1 to recycle TIME_WAIT sockets on the 2.6
Linux kernel.
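To quantify the TIME_WAIT buildup while the test runs, the state column of netstat output can be tallied. A sketch with sample rows inlined so it is reproducible here; in practice, replace the printf with `netstat -tan | tail -n +3` (the sample addresses are made up):

```shell
# tally TCP connection states (last field of each netstat -tan data row)
printf '%s\n' \
  'tcp 0 0 10.1.1.1:3128 10.1.129.5:4411 TIME_WAIT' \
  'tcp 0 0 10.1.1.1:3128 10.1.129.6:1422 TIME_WAIT' \
  'tcp 0 0 10.1.1.1:3128 10.1.129.7:2901 ESTABLISHED' |
awk '{ states[$NF]++ } END { for (s in states) print s, states[s] }' | sort
```

Watching this count during a run shows whether the tw_recycle tuning is actually draining TIME_WAIT sockets faster than the load creates them.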

Questions:


1. Can you suggest which parameters need to be tuned?
(http://www.psc.edu/networking/projects/tcptune/#Linux - I have tuned all
of these.)

2. More errors appear in polyclt:

009.83| Xaction.cc:74: error: 64/269 (c19) unsupported HTTP status code
1102401683.126442# obj: 
http://10.1.131.73:18256/w0b5403f7.348e52ca:03cc/t02/_0011.html flags: 
basic,GET, size: 0/-1 xact: 
0b5403f7.348e52ca:0001d376
HTTP/1.0 503 Service Unavailable
Server: squid/3.0-PRE3
Mime-Version: 1.0
Date: Tue, 07 Dec 2004 06:40:09 GMT
Content-Type: text/html
Content-Length: 2007
Expires: Tue, 07 Dec 2004 06:40:09 GMT
X-Squid-Error: ERR_CONNECT_FAIL 105
X-Cache: MISS from jasmine.kovaiteam.com
Via: 1.0 jasmine.kovaiteam.com (squid/3.0-PRE3)
Proxy-Connection: close

3. I am getting 110 rep/sec with 350 req/sec configured in the Polygraph
configuration; does that mean squid is satisfying the 350 req/sec? Is it
correct?

4. How do I find squid's request-satisfaction saturation point?

5. Is it better to use polymix-3 instead of polymix-4? Which workload is best
for squid benchmarking?
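On question 3: if 350 req/sec are offered but only 110 rep/sec come back, the cache is satisfying only a fraction of the offered load, not all of it. A quick check of that fraction (numbers taken from the question above):

```shell
# fraction of the offered load actually answered: 110 rep/sec out of 350 req/sec
awk 'BEGIN { printf "%.0f%%\n", 110 / 350 * 100 }'
```

A sustained reply rate well below the request rate usually marks the saturation point asked about in question 4.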

Regards
Muthu




squid-2.5 s7 polygraph benchmarking

2004-11-30 Thread Muthukumar
Hello All,

When I tried to benchmark squid 2.5 STABLE7, I ran into a problem with
TIME_WAIT on the Polygraph server.

Setup:

polyclt <==> squid <==> polyserver
10.1.1.1-50              10.1.129.1-250
TIME_WAIT                TIME_WAIT

Polygraph server:

   model name  : Pentium III (Coppermine)
   cpu MHz : 927.748

Polyclt, squid running with configuration:

model name  : AMD Athlon(tm) XP 2400+
cpu MHz : 2001.015

cache_dir 4096 MB
cache_mem 40 MB

Questions:

1. What causes "X-Squid-Error: ERR_CONNECT_FAIL 113" / HTTP/1.0 503 Service
Unavailable?
2. Do we have to tune kernel parameters when benchmarking squid?


Status report error from polyclient (2.8.1):

=
000.89| Xaction.cc:74: error: 16/16 (c19) unsupported HTTP status code
1101808153.584234# obj: 
http://10.1.130.98:18256/w0b4af796.794153cd:021a/t04/_0001 flags: 
basic,GET, size: 0/-1 xact:
0b4af796.794153cd:044e
HTTP/1.0 503 Service Unavailable
Server: vicache/2.5.STABLE7
Mime-Version: 1.0
Date: Tue, 30 Nov 2004 09:48:22 GMT
Content-Type: text/html
Content-Length: 1293
Expires: Tue, 30 Nov 2004 09:48:22 GMT
X-Squid-Error: ERR_CONNECT_FAIL 113
X-Cache: MISS from polyclt2
Proxy-Connection: keep-alive


ERROR: The requested URL could not be retrieved


ERROR
The requested URL could not be retrieved


While trying to retrieve the URL:
http://10.1.130.98:18256/w0b4af796.794153cd:021a/t04/_0001

The following error was encountered:




Squid Capacity Plan analysis

2004-11-17 Thread Muthukumar
Hello All,

Is there any specific analysis or research on capacity planning with squid.

I found this thread:
http://www.squid-cache.org/mail-archive/squid-users/199912/0508.html

which discusses squid capacity planning, but I could not access the links
referenced there.

Is the squid dev team maintaining records on this? Thanks for your help.

Regards
--muthu


---

Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.796 / Virus Database: 540 - Release Date: 11/13/2004 



Re: [squid-users] squid + epoll polygraph test

2004-11-02 Thread Muthukumar
Hi Gonzalo,

> I've been using squid3 with epoll support for a couple of months.
> In my case, squid with poll/select did consume up to 100% CPU.  With epoll, CPU 
> usage dropped to less than 10%.

That sounds great. How many requests per second are being generated?

Are you using squid-3.0-pre3 plus the latest epoll() patch?
I am analyzing the requests-satisfied-per-second rate of squid-3.0pre3 + epoll().

Compilation:
./configure --prefix=/home/muthu/squidepoll --enable-epoll  
--with-aufs-threads=32 --with-descriptors=32768 --with-pthreads  
--enable-storeio=null,ufs,aufs --disable-poll --disable-select --disable-kqueue

Configuration:

 cache_mem 90 MB( 200 MB RAM )
 cache_dir null /dev/null
 cache_access_log none
 cache_store_log none

> Long term average & max CPU usage:
> http://webs.uolsinectis.com.ar/garana/x/cpu.4.png
>
> With epoll, CPU usage over the last 24 hours:
> http://webs.uolsinectis.com.ar/garana/x/cpu.png

Thanks for the information.

Regards
--Muthu






Re: [squid-users] squid + epoll polygraph test

2004-11-02 Thread Muthukumar

Hello Henrik,

Thanks again.

>> Is there anyone benchmarked squid+epoll() on polygraph? How may I expect requests 
>> satisfaction limit on Linux host
>> 2.6.5-1.358 #1 i686 athlon i386 GNU/Linux platform?
>
> There has not been any benchmark on Squid-3 + epoll in a long time. The performance 
> of this is not known.

How many requests per second are squid + poll() / squid + select() handling on
32-bit hardware?
Is the squid dev team maintaining reports on this? It would be good to have
for a comparison report.


> The Squid developers is currently focused on first getting Squid-3 reasonably stable 
> and correct before looking at performance.

Just curious, when will squid-3.0 stable be released?


>> During polygraph testing, I am getting errors as,
>> 004.03| ./Xaction.cc:79: error: 1/1 (267) unsupported HTTP status code
>> 004.03| ./Xaction.cc:79: error: 2/2 (267) unsupported HTTP status code
>
> Could be many things.
>
> I would recommend starting with a Squid-2.5 to verify that you have the Polygraph 
> setup correct. This should run without any 
> errors except the expected ones..
>
> Then try out Squid-3.

epoll() support will be in squid-3.0.
I am tasked with checking how many requests per second squid + epoll() can
support.

Regards
-Muthukumar





squid + epoll polygraph test

2004-11-01 Thread Muthukumar
Hello All,

I am preparing an epoll() I/O method benchmark with Polygraph (Polygraph
2.5.5), with the following setup:

squid + epoll():
Linux host 2.6.5-1.358 #1 i686 athlon i386 GNU/Linux
Squid Cache: Version 3.0-PRE3
configure options: '--prefix=/home/muthu/squidepoll' '--enable-epoll' 
'--with-aufs-threads=32' '--with-descriptors=32768' 
'--with-pthreads'
'--enable-storeio=null,ufs,aufs' '--disable-poll' '--disable-select' 
'--disable-kqueue'

Polygraph server-1:
Linux host 2.4.18-14 #1 i686 i686 i386 GNU/Linux
Polygraph 2.5.5

Polygraph client-1:
Linux host 2.4.18-14 #1 i686 i686 i386 GNU/Linux
Polygraph 2.5.5

Has anyone benchmarked squid + epoll() with Polygraph? What request-satisfaction
limit may I expect on the Linux host 2.6.5-1.358 #1 i686 athlon i386 GNU/Linux
platform?

During polygraph testing, I am getting errors as,
004.03| ./Xaction.cc:79: error: 1/1 (267) unsupported HTTP status code
004.03| ./Xaction.cc:79: error: 2/2 (267) unsupported HTTP status code

--- Polygraph configuration --

Bench benchPolyMix3 = {
peak_req_rate = undef();  // must be set

client_addr_mask = '10.1.0.0';// may be adjusted
server_addr_mask = '10.1.0.0:18256'; // may be adjusted

max_client_load = 800/sec;   // maximum load per Polygraph PC
max_robot_load = 0.4/sec;// maximum robot request rate

client_host_count = undef(); // number of polyclts in the bench
};

ObjLifeCycle olcStatic = {
birthday = now + const(-1year); // born a year ago
length = const(2year);  // two year cycle
variance = 0%;  // no variance
with_lmt = 100%;// all responses have LMT
expires = [nmt + const(0sec)];  // everything expires when modified
};

ObjLifeCycle olcHTML = {
birthday = now + exp(-0.5year); // born about half a year ago
length = logn(7day, 1day);  // heavy tail, weekly updates
variance = 33%;
with_lmt = 100%;// all responses have LMT
expires = [nmt + const(0sec)];  // everything expires when modified
};

ObjLifeCycle olcImage = {
birthday = now + exp(-1year);  // born about a year ago
length = logn(30day, 7day);// heavy tail, monthly updates
variance = 50%;
with_lmt = 100%;   // all responses have LMT
expires = [nmt + const(0sec)]; // everything expires when modified
};

// object life cycle for "Download" content
ObjLifeCycle olcDownload = {
birthday = now + exp(-1year);  // born about a year ago
length = logn(0.5year, 30day); // almost no updates
variance = 33%;
with_lmt = 100%;   // all responses have LMT
expires = [nmt + const(0sec)]; // everything expires when modified
};

// object life cycle for "Other" content
ObjLifeCycle olcOther = {
birthday = now + exp(-1year);  // born about half a year ago
length = unif(1day, 1year);
variance = 50%;
with_lmt = 100%;   // all responses have LMT
expires = [nmt + const(0sec)]; // everything expires when modified
};


// PolyMix-1 content
Content cntPolyMix_1 = {
kind = "polymix-1"; // just a label
mime = { type = undef(); extensions = []; };
size = exp(13KB);
obj_life_cycle = olcStatic;
cachable = 80%;
};

Content cntImage = {
kind = "image";
mime = { type = undef(); extensions = [ ".gif", ".jpeg", ".png" ]; };
obj_life_cycle = olcImage;
size = exp(4.5KB);
cachable = 80%;
};

Content cntHTML = {
kind = "HTML";
mime = { type = undef(); extensions = [ ".html" : 60%, ".htm" ]; };
obj_life_cycle = olcHTML;
size = exp(8.5KB);
cachable = 90%;
may_contain = [ cntImage ];
embedded_obj_cnt = zipf(13);
};

Content cntDownload = {
kind = "download";
mime = { type = undef(); extensions = [ ".exe": 40%, ".zip", ".gz" ]; };
obj_life_cycle = olcDownload;
size = logn(300KB, 300KB);
cachable = 95%;
};


Content cntOther = {
kind = "other";
obj_life_cycle = olcOther;
size = logn(25KB, 10KB);
cachable = 72%;
};

Phase phWait = { name = "wait"; goal.xactions = 1; log_stats = false; };

Phase phCool = { name = "cool"; goal.duration = 1min; load_factor_end = 0; log_stats = 
false; };

Bench TheBench = benchPolyMix3; // start with the default settings
TheBench.peak_req_rate = 200/sec;
size ProxyCacheSize = 12GB;
rate FillRate = 90%*TheBench.peak_req_rate;

TheBench.client_host_count = clientHostCount(TheBench);

// robots and servers will bind to these addresses
addr[] rbt_ips = robotAddrs(TheBench);  // or ['127.0.0.1' ** 2 ];
addr[] srv_ips = serverAddrs(TheBench); // or ['127.0.0.1:8080', '127.0.0.1:' ];

// popularity model for the robots
PopModel popModel = {
pop_distr = pmUnif();
hot_set_frac =  1%;  // fraction of WSS (i.e., hot_set_size / WSS)
hot_set_prob = 10%;  // prob. of req. an object from the hot set
};

// describe PolyMix-3 server
Server S = {
kind = "PolyMix-3-srv";

contents  = [ cntImage: 65%, cntHTML: 15%, cntDownload: 0.5%, cntOther ];
direct_access = [ cntHTML, cntDownload, cntOther ];

xact_think = norm(2.5sec, 1sec);
pconn_use_lmt = zip

Re: [squid-users] flat file parsing vs db filter rules parsing

2004-10-29 Thread Muthukumar

Hello Henrik,

Thanks for your detailed explanations.

>> We are trying to attain good performace compared to DB filters, so that which 
>> database will be appropriate to again this. 
>> Selection list is as, MySQL, .. BDB...
>
> As I said, most peole needing performance in this kind of applications selects 
> Berkeley DB.

We have two ways of processing filter rules from a DB:

1. strtokFile() reads the filter rules from a DB file (acl test urlpath_regex
   -i "/etc/database.db").
   The parsed filter rules are stored in system memory in splay-tree / linked
   structures.
   Squid then reads the stored filter rules from system memory to process
   client requests.

   This requires automatic updates to the DB; reconfiguring squid picks up the
   changes.

2. strtokFile() reads the filter rules from a FLAT file (/etc/urlsites)
   (acl test urlpath_regex -i "/etc/urlsites").
   The parsed filter rules are then moved into a database in a contiguous
   manner (marshalling into BDB).
   Squid reads the stored filter rules from the DB into system memory
   (unmarshalling from BDB), then processes each client request.

   This requires automatic updates to the FLAT files so that the DB is
   modified to match; reconfiguring squid picks up the changes.

Which of these designs will make the performance difference?

Regards
Muthukumar.











Re: [squid-users] flat file parsing vs db filter rules parsing

2004-10-28 Thread Muthukumar

>
>> Will squid only parse 256 characters of filter rules in that file? what will 
>> happened when the pattern limit exceeds 256 length?
>
> This is the line limit, not the limit of the file.

Yes - what happens when a pattern line grows longer than 256 characters? Is
there a system default that limits a line to 256 characters?
Do we need to increase the line length when we use patterns longer than 256
characters?

How did the developers choose 256 characters as the maximum length of a line?

>> We are needing your another guide on selecting Database. Which will be good to use? 
>> We are now progress with MySQL.
>
> A local MySQL database may be fine, but most applications doing things like this 
> selects to use a Berkerly DB file..

We are trying to attain good performance with DB-based filters, so which
database would be appropriate to achieve this? The selection list is:
MySQL, .. BDB...

Can you recommend the fastest and most efficient DB?

Thanks  for your help.

Regards
Muthukumar.





Re: [squid-users] flat file parsing vs db filter rules parsing

2004-10-27 Thread Muthukumar

>> If you are doing this inside Squid then whatever you do should fulfill the 
>> non-blocking property. You do not want Squid to stop
>> processing requests only because it is waiting for an response from the DB system.
>
> We have started analysis on making flat file filter rules to DB Based.
>
> Our Objective is to make as,
>acl   "FLAT FILE" --> acl   "DB"
>
> We are planning to use MySQL as DB, because of good API support to coding.
>
> How the flat file "FLAT FILE" filter rules are parsed and stored in linked list 
> structure? I have tried to start squid with 
> debug_options ALL,9 to get some useful informations regarding flat file parsing, But 
> FLATFILE parsing is not being in debug output 
> as,
>
> grep 'testsite' cache.log
> 2004/10/28 09:34:29| aclMatchAcl: checking 'acl site dstdomain -i 
> "/usr/local/squidsauth/etc/testsite"'
> 2004/10/28 09:34:29| aclMatchAcl: checking 'acl site dstdomain -i 
> "/usr/local/squidsauth/etc/testsite"'
>
> Which file parses FLATFILE details and storing into system memory on squid?

I found the source file and function:

cache_cf.c / strtokFile()

I have doubts about the strtokFile() code:

strtokFile()
    LOCAL_ARRAY(char, buf, 256);
    ...
    /* fromFile */
    if (fgets(buf, 256, wordFile) == NULL) {
        /* stop reading from file */

Will squid parse only 256 characters of a filter-rule line in that file? What
happens when a pattern exceeds 256 characters?
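The fgets() call above never returns more than 255 characters plus the terminating NUL per call; the remainder of an over-long line is handed back by the next call, so a 300-character pattern would effectively be tokenized as two separate fragments. A shell sketch of the split (the 300-character line is a made-up example):

```shell
# emulate fgets(buf, 256, f) on a single 300-character line:
# it comes back as a 255-char fragment followed by a 45-char fragment
line=$(printf 'x%.0s' $(seq 1 300))
echo "${#line}"
echo "$line" | fold -w 255 | awk '{ print NR": "length($0)" chars" }'
```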

Do we need to raise this 256-byte buffer when we make the filtering DB-based?

We also need your guidance on selecting a database. Which would be good to use?
We are currently progressing with MySQL.

Thanks for your inputs.

Regards
Muthukumar.

===





Re: [squid-users] flat file parsing vs db filter rules parsing

2004-10-27 Thread Muthukumar
> If you are doing this inside Squid then whatever you do should fulfill the 
> non-blocking property. You do not want Squid to stop 
> processing requests only because it is waiting for an response from the DB system.

We have started analysis on making flat file filter rules to DB Based.

Our Objective is to make as,
acl   "FLAT FILE" --> acl   "DB"

We are planning to use MySQL as DB, because of good API support to coding.

How are the flat file "FLAT FILE" filter rules parsed and stored in a
linked-list structure? I have tried starting squid with debug_options ALL,9 to
get some useful information about flat-file parsing, but FLAT FILE parsing does
not appear in the debug output:

grep 'testsite' cache.log
2004/10/28 09:34:29| aclMatchAcl: checking 'acl site dstdomain -i 
"/usr/local/squidsauth/etc/testsite"'
2004/10/28 09:34:29| aclMatchAcl: checking 'acl site dstdomain -i 
"/usr/local/squidsauth/etc/testsite"'

Which file parses the FLAT FILE details and stores them into system memory in squid?

Thanks for your validation and inputs.

Regards
Muthukumar.





Re: [squid-users] flat file parsing vs db filter rules parsing

2004-10-25 Thread Muthukumar
Hello Henrik,

Thanks once again for your reply on this.

>> I heard Performance and parsing time using db based are fast. Can we adopt db based 
>> filter rules parsing on squid-2.5 series 
>> without using any redirectors there. How the squid-3.0 adaptation will be differed 
>> from now?
>
> The difference is the parsing time. The lookup time is the same. When using a db 
> based filter parsing is done when building the 
> db, when using a flat file parsing is done when reading the configuration.
>
> lookup time is determined mainly by the type of acl, not how it is stored.

So the lookup time for the type of ACL is the same; it is the time to parse the
regex patterns and information strings behind the acl filters that makes the
difference.

>> Can you prefer, how we can know filter rules parsing with flat files or db based 
>> conceptually? We are on-going with source files 
>> of squid's *cf* .c and .h files. We are on the analysis to improve squid filter 
>> rules parsing and filter adapation.
>
> squid only have the flat file approach to acl specifications, parsing the whole acl 
> each time the configuration file is read, 
> storing the parsed result in memory for optimal lookup time. It should be noted that 
> when the acl is parsed it is no longer a flat 
> file but using other structures (depending on the acl type).

So Squid parses its filters from flat files.
Does the Squid development team plan to deploy DB-based filter-pattern parsing 
and recognition rules, as squidGuard does?

> squidGuard have the option to select db based or flat file. As said earlier the 
> lookup performance is identical, but the startup 
> performance (parsing) is significantly different for very large lists.

We are analysing how to deploy DB-based filter parsing in Squid. Is it good and 
efficient in terms of parsing-time performance? Has anyone compared DB filter 
parsing with flat-file parsing?

Is the Squid 3.0 series going to support DB-based filters as squidGuard does, 
or will it instead give better support to filter redirectors, so that 
performance for very large filter lists can improve?

Regards
Muthukumar.





Re: [squid-users] flat file parsing vs db filter rules parsing

2004-10-25 Thread Muthukumar

Hello Henrik,

Thanks for the detailed explanation. I have some more queries, inlined below:

>> We can parse and make filter rules with flat file manner ( squid configuration file 
>> parsing ) and database oriented parsing
>> and make filter rules ( squidguard ). Is it correct? what is the difference between 
>> these? Is there any performance, time rate 
>> and
>> difference factors between them?
>
> It depends very much on what kind of acls you are using. This is true for both Squid 
> and SquidGuard.

Could you clarify which ACL types are parsed from flat files and which use 
DB-based filter-rule parsing? How does the implementation of filter-rule 
parsing differ between the two? All configuration is parsed from the squid.conf 
flat file; where would DB-based parsing logic fit into Squid?

> What can be said about performance is that regex lists is bad and that redirectors 
> (such as SquidGuard) is seriously penalized by 
> the redirector interface of Squid making them unfeasible in larger setups, at least 
> until Squid-3.0 is released and the 
> redirectors have been adapted to the new interface available there.

I heard that performance and parsing time are better with a DB-based approach. 
Can we adopt DB-based filter-rule parsing in the squid-2.5 series without using 
any redirectors? How will the squid-3.0 adaptation differ from the current one?

Could you point us to where we can learn, conceptually, how filter-rule parsing 
works with flat files versus a DB? We are currently working through Squid's 
*cf* .c and .h source files, analysing how to improve Squid's filter-rule 
parsing and filter adaptation.

Thanks for your information and time.

Regards
Muthukumar.





flat file parsing vs db filter rules parsing

2004-10-25 Thread Muthukumar
Hello All,

  We can build filter rules either by flat-file parsing (Squid configuration 
file parsing) or by database-oriented parsing (squidGuard). Is that correct? 
What is the difference between the two? Are there performance, timing, or other 
differentiating factors between them?

  Is there any documentation on parsing configuration from flat files and from 
DBs?
  We are currently working through the *cf* source files; it would be good to 
understand the difference between the two approaches.

  Thanks in advance for sharing.

Regards
Muthukumar.







Re: SSO identification on Squid

2004-06-22 Thread Muthukumar

> I am writing to you as a Nufw developper. Nufw is, shortly said, a
> users-aware firewall, released on GPL v2. Basically, it marks any (TCP and
> others) connections with a user id. This leads to (hopefully) interesting
> perspectives in terms of transparent users identification/authentication.
> Right now, an apache module exists, which lets users be identified to an
> Apache server, without any interactive login/password prompt.

The perspective you describe is a good one, because today only NTLM 
authentication offers transparent user identification/authentication.
Transparent user identification/authentication is useful in environments where 
DHCP allocates the IPs, and with multiple subnets, where MAC addresses cannot 
be used to control the HTTP connections.

A similar requirement is described in this thread:
http://www.mail-archive.com/[EMAIL PROTECTED]/msg17988.html


> More details about the nufw project can be found at www.nufw.org.
>
> Anyway, this email is not about Nufw, sorry about this too long introduction.

No need to apologise; a good introduction is needed to understand the project 
and to bring in Squid development work.

> In a view to create a SSO authentication solution (based on nufw) for
> Squid, we need to build an authentication module for squid. It needs the
> following informations from squid : (source IP, source Port, destination
> IP, destination port), all these about the connection from the
> browser/client to the Squid server.

Authentication modules are configured according to the user's requirements. A 
module is given specific information, as with the NCSA auth method and its 
password file. The helper is started by Squid; it does not receive the client 
connection information from Squid, and it has to return the current user 
information to Squid.
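The interface Squid 2.5 gives such a basic-auth helper is simple: the helper reads one "username password" pair per line on stdin and answers OK or ERR on stdout. Below is a minimal sketch of the line-handling logic; the `check_credentials` body is a placeholder (a real NCSA-style helper would look the user up in an htpasswd file and verify the crypted password), and `handle_line` is an invented name for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Placeholder credential check -- hard-coded for illustration only.
 * A real helper would consult a password file, PAM, LDAP, etc. */
static int check_credentials(const char *user, const char *pass)
{
    return strcmp(user, "demo") == 0 && strcmp(pass, "secret") == 0;
}

/* Process one request line from Squid ("username password\n") and
 * return the reply string Squid expects ("OK" or "ERR"). */
static const char *handle_line(const char *line)
{
    char user[128], pass[128];
    if (sscanf(line, "%127s %127s", user, pass) == 2 &&
        check_credentials(user, pass))
        return "OK";
    return "ERR";
}
```

A real helper would loop with fgets() on stdin, print the reply from handle_line(), and fflush(stdout) after every line so Squid is never left waiting on a buffered answer.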

> In the nufw point of view, user should not be prompted with
> username/password (or maybe in a second period, if user cannot be
> identified through Nufw).

Only NTLM does this today. See:
http://devel.squid-cache.org/ntlm/

With NTLM, web access flows as:

                        internet
                       /
  client ---> squid <-----> NTLM auth module
                       \
                        apache-server

> I have read this thread :
>
> http://www.mail-archive.com/[EMAIL PROTECTED]/msg01881.html
>
> which is about the source IP address, so I suppose this should be possible.

The requirement is that authentication be done as described above.

I am not sure how it can be adapted for transparent authentication.

Regards,
Muthukumar.







reverse proxy handling

2004-04-07 Thread Muthukumar
Hello All,

The setup: an origin server delivers dynamic content to a Squid accelerator, 
and Squid serves the requests of multiple clients. By default Squid will not 
cache the dynamic content.

Is there a way to cache it by using the "vary_ignore_expire on" setting and 
changing refresh_pattern to consider, for example, only .php files:
refresh_pattern \.php$ 5 20% 10
to check the freshness of the object? If the object is 5-10 minutes old, fetch 
it again and serve the client requests from the fresh copy.
Is this possible? If not, is there another way to do it?

Regards,
Muthukumar.










Re: squid-2.5 to squid-3.0 porting

2004-04-06 Thread Muthukumar
> On Sun, 4 Apr 2004, Henrik Nordstrom wrote:
>
> > The problems with Bug #856 was also only trivial confligts due to
> > unrelated fragments from other patches. Resolved, but unfortunately I can
> > not reach squid-cache.org to commit the changes right now..
>
> Now committed.

Hi Henrik,

Thanks for the appreciation of the patch-porting work.

> Now nearly all the patches you ported has been committed to 3.0.
> Only the following is left due to unresolved conflicts:
>
> Bug #849: DNS log error messages should report the failed query
>
I have gone through the Bug #849 patch alongside the other patches and found 
nothing related to them. The Bug #799 information is rewritten in Bug #849; 
there is no overlap with the other patches.

>The position of porting Squid-2.5 patches to Squid-3.0 is always there,
>but I assume you would like some more interesting tasks? Does any of hte
>projects at http://devel.squid-cache.org/ look appealing to you? Or do you
>have any other ideas you'd like to investigate?

I have gone through the site and noted the following:
a) ETag appears to be written only for squid-2.5. Is ETag supported in squid-3.0?
b) How much weight does the Squid Net I/O Performance project carry? It too is 
developed against squid-2.5.
c) Is WCCP version 2.0 support available in squid-3.0? It too targets squid-2.5.
d) IPv6

Is there a way to examine SSL-enabled requests in Squid? How are the SSL 
certificates and key transmission managed?

I do not know how valuable the items above would be to Squid. Your guidance and 
suggestions on where to dig in and do some good work for Squid would be welcome.

Regards,
Muthukumar.





squid-2.5 to squid-3.0 porting

2004-04-04 Thread Muthukumar
Hi Henrik,

Your squid-2.5 patches have been ported to the 3.0 version. I hope you have 
committed some of the patches to the squid-3.0 CVS.

The Bug #862 patch problem happened because of extensive changes relative to 
the previous bug-patch changes. I have modified it and attached it in Bugzilla.

I hope I have completed my introductory development work :-)


Regards,
Muthukumar.









Re: Selectively closing connections. Let's make a patch!

2004-04-01 Thread Muthukumar


> . Do you mean that i can close connections without new TAGs?
> If i write this in squid.conf:
> *
> acl manager proto cache_objects
> acl bad_ip src 1.2.3.4
>
> http_access deny proto cache_objects bad_ip
> *

I think I took the wrong track with the manager acl; forget the manager acl for 
this requirement. I have tested the requirement itself.

Check this setting:

acl test src client-ip-address/32

Before "http_access allow all", add:
http_access deny test

Test:

First comment out the deny and browse some URLs:
 #http_access deny test
 http_access allow all

Then test with:
http_access deny test
http_access allow all

squid -k reconfigure
Now you get a denial message, so access is denied for that ACL.

So a close_connections TAG is not needed at this point.

> Will it close connections for bad_ip during reconfigure? Because even "http_access 
> deny bad_ip" - do not close connections for
bad_ip!

For other acls we have to block the GET method too.

Open access:
acl ban1 dstdom_regex .google.com
acl get method GET
#http_access deny get ban1
http_access allow all

We can remove the access with:
acl ban1 dstdom_regex .google.com
acl get method GET
http_access deny get ban1
http_access allow all
squid -k reconfigure

I believe your requirement is entirely a matter of the acl definitions, the 
methods, and the http_access ordering.

Regards,
Muthukumar.




Re: Selectively closing connections. Let's make a patch!

2004-03-31 Thread Muthukumar

> I need help. I want to make some code changes in squid (write a pacth for squid).
> I want to make possible to close active connections SELECTIVELY in squid.

It is good to know.

> For example:
> in squid.conf add new TAG
>   close_connections ACL
> or
>   close_connections allow|deny ACL
>
> then during squid -k reconfigure active connections for this ACL must be closed!

OK, let's go through some examples.

acl ip src 172.16.1.198

http_access allow ip


A design for close_connections might look like this.

To close the connections matching an acl:
close_connections allow|deny ip

If you specify:
close_connections allow ip
then you have to set up the environment with:
http_access allow ip

If:
close_connections deny ip
then you have to set up the environment with:
http_access deny ip

This must be done with great care over the acl rules, because you are changing 
the access environment.

> Please give me some information about where sould i look in squid/src to make this 
> code changes;
> i need to know mechanism of keeping connections (such as method CONNECT) alive
> during squid reconfigure and how to close them selectively.

There are objects in the cache. After squid -k reconfigure, the manager acl
acl manager proto cache_object
must be denied for any acl that close_connections denies.

If you use "close_connections deny" then the manager acl must be denied for 
that acl too.

> I think this feature (selectively closing connections) is very usefull for all.

A detailed, well-analysed design from everyone will make it very useful.

Regards,
Muthukumar.




bug 860 port problem

2004-03-28 Thread Muthukumar
Hello Henrik,

While porting bug 860 (http://www.squid-cache.org/bugs/show_bug.cgi?id=860) to 
squid 3.0, I ran into a problem.

Is there a definition of the aclNBCheck() function anywhere in src/? I can only 
find references:

=== Reference ==
grep aclNBCheck *.cc
ACLChecklist.cc: * B) Using aclNBCheck() and callbacks: The caller creates the
ACLChecklist.cc: *aclNBCheck().  Control eventually passes to 
ACLChecklist::checkCallback(),
ACLChecklist.cc: *original caller of aclNBCheck().  This callback function must
client_side_request.cc: aclNBCheck(context->acl_checklist, 
clientRedirectAccessCheckDone, http);

grep aclNBCheck *.h
...
=

Thanks,
Muthukumar.



Multiple squid instance

2004-03-28 Thread Muthukumar
Hello Henrik,

I need some guidance on running multiple Squid instances on the same machine, 
and on their relative performance.

Type 1:
   Single machine
   ---> squid listens on 3120 (using a separate cache)
   ---> squid listens on some other port (using a separate cache)
   Two Squids run on the same machine with separate caches.

Type 2:
   Two Squids run on the same machine but share one common cache directory.

Type 3:
   a. Multiple Squids run on the same machine sharing a common cache directory.
   b. Multiple Squids run on the same machine, each with its own cache directory.

Type 4:
   Single machine
   ---> squid as a forward proxy
   ---> squid as a reverse proxy (both using the same cache, or separate caches)

In which of these situations will Squid perform best when running multiple 
instances on the same machine?

Regards,
Muthukumar.




Re: About user-level connection tracking mechanism: uselect()

2004-02-25 Thread Muthukumar



> On Wed, 25 Feb 2004, Xia Hongtao wrote:
>
> > Had anyone heard about uselect()? uselect() is also an interface
> > for web applications to improve the performance like epoll().It provide
> > a user-level connection state tracking mechanism. Kernel and web
> > applications share a piece of memory.There are some fd_sets in this
> > shared memory.Each time the socket is ready to be read or write, the
> > relative bit in shared memory will be set a flag.Most work of uselect()
> > is just check these shared memory, without syscall and context switch,
> > without fd_set copy.When there is no ready sockets,uselect() will block
> > until any of them ready.
>
> I have not heard of it before. Sounds interesting. Where can I find more
> information about uselect?

uselect:

http://www.research.ibm.com/compsci/spotlight/web/papers.html
"Kernel Support for Faster Web Proxies,"
Marcel Rosu, Daniela Rosu, Proceedings of the 2003 USENIX Annual Technical Conference 
(USENIX 2003)

Paper
http://pollux.usc.edu/~cs558/papers/
http://www.usenix.org/events/usenix03/tech/rosu.html

>
> Has there been any studies in how uselect positions itself in relation to
> epoll/kpoll when the number of filedescriptors grows?
>
> > The main problem to use this interface at squid-2.5 is: I do not know,
> > in the original comm_select() loop, how many kinds of fd need to be
> > checked by select()? Currently I see these: filesystem fds(for log
> > files), TCP sockets,UDP sockets,pipes(for aio). My uselect currently can

Regards,
Muthukumar.




Porting 2.5 to 3.0

2004-02-24 Thread Muthukumar

Hello Henrik,

The patches for Bug 14, Bug 571, and Bug 753 have been ported from 2.5 to 
squid-3.0-PRE3-20040223, and the source compiles successfully on 
squid-3.0-PRE3-20040223 with those patches applied.

Regards,
Muthukumar.



[Attachment: bug_753.patch, uuencoded. The encoded data is corrupted in the 
archive by address-mangling and is omitted here.]
Re: [squid-users] Re: users-authentication using certificate?

2004-02-23 Thread Muthukumar

> 
> >  I'm using squid for users-authentication by
> > username/password. Can I using certificate for
> > users-authentication, does squid have support this?
> 
> This is supported in Squid-3.0 for accelerator type setups using the 
> https_port directive and the certificate related acls.
> 
> Actually it is even supported for proxy operation using https_port, but to
> our knowledge there is no browsers supporting the use of SSL connections
> to the proxy. 
> The use of SSL is a requirement for certificate based 
> authentication as certificates is a property of SSL, so until there is
> browsers implementing SSL to proxies such authentication is not possible
> for proxied Internet requests.

Henrik,

SSL connections to the proxy are supported by the Netscape 7.1 browser.

Regards,
Muthukumar.



Squid Development

2004-02-23 Thread Muthukumar

Hello Developers,

I participate on the squid-users mailing list and follow the squid-dev list.

I would like to take part in Squid development. I have gone through the Squid 
development projects but, being a beginner, I cannot yet decide which 
particular project to work on; I need guidance on how best to contribute. 
As a start, I have sent a patch that lets refresh_pattern accept fractional 
values in its default unit of minutes.

Henrik advised me to forward this to all squid developers, to gather guidance 
on getting involved in Squid development.

Regards,
Muthukumar.



refresh_pattern in fractions patch

2004-02-18 Thread Muthukumar
Hello All,

 To allow fractional values in refresh_pattern, I have attached a patch as 
refresh_fraction.patch.
 The calculation changes for the float handling are in cache_cf.c.

 I have tested with the refresh times:
 refresh_pattern .   0   20% 4320.12
 as well as:
 refresh_pattern .   0   20% 1.12

 The debug information in cache.log reads:
 2004/02/18 19:26:57| Refresh Values: min=0 max=259207 (max=4320.12)
 2004/02/18 19:21:28| Refresh Values: min=0 max=67 (max=1.12)

To test the calculation, use this (the original posting lost the header names; 
<stdio.h> is needed for printf and <time.h> for time_t):

#include <stdio.h>
#include <time.h>

int main(void)
{
    float i = 4320.12f;              /* 4320.12 is the fractional time in minutes */
    float max;
    max = (int) (time_t) (i * 60);   /* convert minutes to whole seconds */
    printf("Max Refresh Time:%f\n", max);
    printf("Max Refresh Time:%d\n", (int) max);
    return 0;
}
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
refresh_patter.patch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
--- cache_cf_old.c  2004-02-18 19:34:53.0 +0530
+++ cache_cf.c  2004-02-18 19:38:12.0 +0530
@@ -218,6 +218,23 @@
 return i;
 }

+/*
+* Used to get the fractional or normal refresh pattern values
+*/
+
+float
+Getfloat(void)
+{
+char *token = strtok(NULL, w_space);
+float i;
+if (token == NULL)
+self_destruct();
+if (sscanf(token, "%f", &i) != 1)
+self_destruct();
+return i;
+}
+
+
 static void
 update_maxobjsize(void)
 {
@@ -1816,6 +1833,7 @@
 int ignore_reload = 0;
 #endif
 int i;
+float j;
 refresh_t *t;
 regex_t comp;
 int errcode;
@@ -1832,12 +1850,12 @@
 if (token == NULL)
self_destruct();
 pattern = xstrdup(token);
-i = GetInteger();  /* token: min */
-min = (time_t) (i * 60);   /* convert minutes to seconds */
+  j = Getfloat(); /* token: min */
+min = (int)(time_t) (j * 60);  /* convert minutes to seconds */
 i = GetInteger();  /* token: pct */
 pct = (double) i / 100.0;
-i = GetInteger();  /* token: max */
-max = (time_t) (i * 60);   /* convert minutes to seconds */
+ j = Getfloat();  /* token: max */
+max = (int)(time_t) (j * 60);  /* convert minutes to seconds */
 /* Options */
 while ((token = strtok(NULL, w_space)) != NULL) {
 #if HTTP_VIOLATIONS
@@ -1875,6 +1893,7 @@
 t->min = min;
 t->pct = pct;
 t->max = max;
+   debug(22, 1) ("Refresh Values: min=%d max=%d\n",(int)min,(int)max);
 if (flags & REG_ICASE)
t->flags.icase = 1;
 #if HTTP_VIOLATIONS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>


Review is needed.

Regards,
Muthukumar.


Re: Squid As A Non-Caching Reverse Proxy/Web Accelerator?

2004-02-15 Thread Muthukumar
> 
> I am thinking of solutions for minimizing apache's
> memory use on a small memory server in the presence of
> several slow clients or long-running http requests
> (large downloads) and with about 50% of the requested
> pages being dynamic.

If you go with the reverse-proxy method, dynamic content such as CGI scripts 
and Active Server Pages cannot be cached; the proxy caches static content only, 
which frees the web server to handle the dynamic pages.

>
> I need a reverse proxy server that can buffer output
> from apache so that I won't need many active apache
> processes to be able to serve slow clients, and I'm
> considering squid with caching disabled.

A reverse proxy works as follows: when a client makes an HTTP request, it is 
directed to the reverse-proxy machine, not to the actual web server. If the 
requested content is cached there, it is served directly; otherwise the content 
is retrieved from the actual web server and used to serve the client request.

> 
> I'd like to know how squid, in reverse proxy
> mode, handles a situation where the origin server is
> very fast but the client it's serving is slow. 

A reverse-proxy setup is intended exactly for situations where the web server 
is heavily loaded and slow.

> Does it buffer the server's response and allow it to close
> the connection quickly and serve other processes?

Yes.

> Is there an architecture document somewhere that _fully_
> answers my question?

Yes:
http://squid.visolve.com/white_papers/reverseproxy.htm

Regards,
Muthukumar.



Re: Squid and WCCP v2 support

2004-01-29 Thread Muthukumar

> i am not sure how to take it further, it needs a setting in the config file, and the 
> wccp draft (which i think has expired already) from cisco calls that it is possible 
> to negptiate the forwarding method.  i was unable to contact whomever worked on the 
> wccpv2 code... 
>  
> can you guys point me to the prper person and advice is wlecome... 
OK. Check this out for the WCCP 2.0 implementation in Squid 2.5:
http://squid.visolve.com/

Regards,
Muthukumar.


Analyse and rewrite the HEADER

2003-12-03 Thread MUTHUKUMAR KANDASAMY
Hello All,

I hope you can help me to do this:

I have an incoming regular POST to /admin/ with a special header 'x-myheader:' 
that needs to be analyzed and rewritten. The POST is then passed on to the 
origin server on a different port (in its original form plus the changed 
x-myheader), and the connection is held open until a response is retrieved and 
passed back to the original requester.

The request comes in to Squid or some other proxy:

   frontend.mydomain.com:80
   POST /admin/ HTTP/1.1
   Host: backend1.mydomain.com
   Content-Length: 555
   Content-Type: application/vnd.*
   x-myheader: 12345
   Connection: Keep-Alive

The connection is held open while a call-out procedure rewrites
   x-myheader: value
and the request goes back out to the intended recipient:

   frontend.mydomain.com:8080
   POST /admin/ HTTP/1.1
   Host: backend1.mydomain.com
   Content-Length: 555
   Content-Type: application/vnd.*
   x-myheader: 54321
   Connection: Keep-Alive

The response goes back to the requester:

   HTTP/1.1 200 OK
   Content-Type: application/vnd.*
   Content-Length: 74
   Connection: Keep-Alive
   Keep-Alive: timeout=15, max=100

   Your message has been received, thank you!

How can we do this using Squid?
Thanks in advance for your responses.
Thanks,
Muthukumar.



[squid-dev]Epoll Squid Test on Ia64

2003-09-05 Thread MUTHUKUMAR KANDASAMY
Hello all,

Thank you, developers, for the helpful replies.

I have tuned file-max to 32768 and changed cache_mem to 64 MB.
Then I changed the kernel parameters to:
net.ipv4.ipfrag_low_thresh = 196608
net.ipv4.ipfrag_high_thresh = 262144
net.ipv4.ipfrag_time = 45
net.ipv4.tcp_rmem = 4096  87380 174760
net.ipv4.tcp_wmem = 4096  16384 131072
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 4096
net.ipv4.neigh.default.gc_thresh3 = 8192
net.core.rmem_max = 65535
net.core.rmem_default = 65535
net.core.wmem_max = 65535
net.core.wmem_default = 65535
But I am still getting messages in /var/log/messages:

Sep  5 17:20:53 pandia squid[1440]: Squid Parent: child process 1442 started
Sep  5 17:21:05 pandia kernel: squid(1442): unaligned access to 
0x20e4bff4, ip=0x2031bb50
Sep  5 17:21:05 pandia kernel: squid(1442): unaligned access to 
0x20e4bfec, ip=0x2031bbc0
Sep  5 17:21:05 pandia kernel: squid(1442): unaligned access to 
0x20e63ff4, ip=0x2031bb50
Sep  5 17:21:05 pandia kernel: squid(1442): unaligned access to 
0x20e4bf
Sep  5 17:21:26 pandia kernel: squid(1449): unaligned access to 
0x20cdbf
Sep  5 17:21:26 pandia kernel: squid(1449): unaligned access to 
0x20cdbf
Sep  5 17:21:26 pandia kernel: squid(1449): unaligned access to 
0x20cdbf
Sep  5 17:21:26 pandia kernel: squid(1449): unaligned access to 
0x20ce3f
Sep  5 17:21:28 pandia squid[1452]: Squid Parent: child process 1454 started
Sep  5 17:24:29 pandia squid[1452]: Squid Parent: child process 1454 exited 
with  status 255

I am getting many messages like "Squid Parent: child process 1454 exited with 
status 255" in /var/log/messages.

Initially, with the parameters:
net.ipv4.neigh.default.gc_thresh1 = 128
net.ipv4.neigh.default.gc_thresh2 = 512
net.ipv4.neigh.default.gc_thresh3 = 1024
I got "Squid Parent: child exited due to signal 6" messages. After tuning those 
parameters, the "exited due to signal 6" messages stopped.

So please suggest corrections to the kernel parameters above.

Squid server:
   Processor: Dual Itanium 2, 900 MHz
   Memory:    2 GB
   OS:        RedHat Advanced Server 2.0
   Kernel:    kernel-2.4.20
   Squid:     squid-3.0.PRE3

Polygraph server:
   Processor: P III, 930 MHz
   Memory:    512 MB
   OS:        RedHat Linux 7.3 and RedHat Linux 8.0
   Kernel:    kernel-2.4.20
   Polygraph: Polygraph-2.55
Queries:
# Has anybody tested Squid with epoll support on IA64 at more than 300 
requests/sec? If so, please share the memory consumption, kernel version, and 
kernel parameters you used for testing.
# Has anybody checked squid-3.0 with epoll on a 2.6 kernel? I have tried hard 
to compile that kernel, but could not make it work because of assembler, 
compiler, and linker problems. If anybody has done this on IA64, please share 
that information as well.

Caching the replies from the developers,






[squid-dev]Squid-3.0-PRE3 Compilation on Ia64

2003-08-19 Thread MUTHUKUMAR KANDASAMY
Hello Developers,

I am new to this list. I am involved in epoll development for Squid-3.0 on the 
IA64 platform. I have compiled squid-3.0-pre3 with

configure options:

'--prefix=/usr/local/squidbug' '--enable-epoll' '--disable-poll' 
'--disable-select' '--disable-kqueue' '--enable-storeio=null,ufs,aufs' 
'--enable-async-io=16' '--with-file-descriptors=16384' '--with-pthreads'

for testing the epoll netio method on Squid in linux kernel-2.4.20.

The changes in squid.conf
===
cache_dir null /dev/null, http_access allow all, cache_mem 1200 MB, 
half_closed_clients off, server_persistent_connections off, plus the normal 
options.

Squid-3.0-pre3 satisfies requests up to 300 requests/sec in Polygraph
testing. Beyond that limit, Squid-3.0 cannot keep up: the memory
consumption of Squid with epoll support exceeds 1.9 GB, so the Polygraph
entries report errors.

I have traced the squid-3.0 memory consumption with top entries.

Memory Usage:
==
11:38am  up 1:19,  3 users,  load average: 1.00, 0.94, 0.63
43 processes: 41 sleeping, 2 running, 0 zombie, 0 stopped
CPU0 states: 67.22% user, 31.9% system, 0.0% nice,  1.19% idle
CPU1 states:  1.4% user,  9.16% system, 0.0% nice, 89.30% idle
Mem:  2053824K av, 2028928K used,   24896K free,  0K shrd,  2096K buff
Swap: 2040208K av,   20112K used, 2020096K free,  23328K cached

 PID  USER   PRI  NI  SIZE   RSS   SHARE  STAT  %CPU  %MEM  TIME   COMMAND
1405  squid  16    0  1903M  1.9G  3520   R     99.9  23.7  13:29  squid

PolyGraph Entry at client side:
===
   016.30| i-top1 254073 100.20  28097  61.87   0 3497
   016.38| i-top1 254073   0.00 -1  -1.000 3498
   016.47| i-top1 254073   0.00 -1  -1.000 3499
I think some of you were involved in compiling squid-3.0 on the IA64
platform, so I want to know whether squid-3.0 with the epoll netio method
normally consumes this much memory for these requests, and what the reason
is for squid-3.0 consuming such huge memory at 350 requests/sec. Is there
any possibility of a memory leak, specifically in comm.cc, in the function:

void
comm_old_write(int fd, const char *buf, int size, CWCB * handler,
               void *handler_data, FREE * free_func)

Any help regarding this problem is appreciated.

Thanks
-Muthukumar


