Re: [squid-users] Need help

2008-03-05 Thread Preetish
To find out about the performance of Squid, install the cache manager.
To monitor the surfing habits, install SARG.
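For reference, the cache manager needs nothing beyond Squid itself: the usual squid.conf stanza (a sketch, using the default ACL names) plus the bundled squidclient tool will show CPU usage, memory usage, and hit ratios:

```
# squid.conf -- allow cache-manager queries from the proxy box only
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
http_access allow manager localhost
http_access deny manager all
```

Then `squidclient mgr:info` (or `mgr:utilization`) prints the counters from the command line.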

On Wed, Mar 5, 2008 at 1:09 PM, piyush joshi <[EMAIL PROTECTED]> wrote:
> Dear All,
>  Can anyone suggest me any free software to monitor squid
> which will show all information like CPU usage, memory usage, number of
> hits, the IP addresses requests come from, top users, top sites, and
> top bandwidth. Please reply to me, I will be grateful to you.
>
> --
> Regards
>
> Piyush Joshi
> 9415414376
>


Re: [squid-users] anonymous proxying sites

2007-10-15 Thread Preetish
On 10/11/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 11, 2007, Thompson, Scott (WA) wrote:
> > Hi all
> > I was wondering if anyone knew a way to block access to anonymous
> > proxying sites. Some of our users have worked out how to bypass the
> > denied.list and as a result we have no logging as to their surfing
> > activity
> > If the only way to do it is via the denied.list does anyone have a good
> > list of sites that perform this function?
> > Some I know of are anonomyzer.com and blockedsiteaccess.com

There are thousands of proxying sites out there. The best way is to use
SquidGuard and keep adding the proxies as you catch them.
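A minimal squid.conf sketch of the blacklist side of that (the file name anon-proxies.conf is just an example; one domain per line):

```
# /etc/squid/anon-proxies.conf holds entries like:
#   .anonymizer.com
#   .blockedsiteaccess.com
acl anonproxies dstdomain "/etc/squid/anon-proxies.conf"
http_access deny anonproxies
```

The leading dot makes each entry match its subdomains as well.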


Re: [squid-users] Block all Web Proxies with squid.

2007-09-04 Thread Preetish
On 9/5/07, Norman Noah <[EMAIL PROTECTED]> wrote:
> Well, if you want to block proxies you can get the list from
>
> www.proxy.org.

But this list is paid. Is there any free list, or can someone send an
attached text file of the list? I face the same issue. Maybe we can
make it work with SquidGuard.
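For reference, wiring SquidGuard into Squid 2.6 is one squid.conf line plus a destination block in squidGuard.conf. This is only a sketch: the paths and the list name "proxysites" are assumptions, and Squid 2.6 calls the directive url_rewrite_program (older releases used redirect_program):

```
# squid.conf
url_rewrite_program /usr/local/bin/squidGuard -c /etc/squidguard/squidGuard.conf

# squidGuard.conf
dest proxysites {
    domainlist proxysites/domains
}
acl {
    default {
        pass !proxysites all
        redirect http://localhost/blocked.html
    }
}
```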
>
> they have the updated list of all running proxies.
>
> why must you allow HTTPS not to go through Squid?
>
> in my environment all internet access must go through Squid.
>


Re: [squid-users] File Descriptors causing an issue in OpenBSD

2007-08-10 Thread Preetish
Hi All,

  Recompiling the kernel with MAXFILES=8192 worked. I even had
to add the line :openfiles-max=infinity:\

to /etc/login.conf in the daemon section. Well, now the number of file
descriptors has increased and even the internet speed is good (I will
know better by tomorrow). I have kept my cache at 10 GB for now. Thanks
to everyone :)

Cheers
Preetish


Re: [squid-users] File Descriptors causing an issue in OpenBSD

2007-08-09 Thread Preetish
> >Odd.. are you sure you are really running the new binary, and that the
> >ulimit setting is done correctly in the start script?

#Squid startup/shutdown

if [ -z "$1" ] ; then
        echo "Syntax is: $0 start|stop"
        exit 1
fi

if [ "$1" != start -a "$1" != stop ]; then
        echo "Wrong command"
        exit 1
fi

if [ -x /usr/local/sbin/squid ]; then
        if [ "$1" = 'start' ] ; then
                echo -n 'Running Squid: '; ulimit -HSn 8192
                /usr/local/sbin/squid
        else
                echo -n 'Killing Squid: '; /usr/local/sbin/squid -k shutdown
        fi
else
        echo 'Squid not found'
fi


> What do you get when you issue the following 2 commands:
> limits
There is no limits command (it is not found).
> and
>
> ulimit -n

1024

> kern.maxfiles
> kern.maxfilesperproc

I did
sysctl -w kern.maxfiles=8192
sysctl -w kern.maxfilesperproc=8192 ---> this gives an error
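For what it's worth, a value set with sysctl -w is lost on reboot; on OpenBSD the persistent place is /etc/sysctl.conf (a sketch below). Also, kern.maxfilesperproc is a FreeBSD knob, which is presumably why the second command errors; on OpenBSD the per-process limit comes from the login class instead:

```
# /etc/sysctl.conf -- applied at boot
kern.maxfiles=8192
```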

Then I even changed the options in /etc/login.conf:
{{
default:\
:path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/local/bin:\
:umask=022:\
:datasize-max=512M:\
:datasize-cur=512M:\
:maxproc-max=512:\
:maxproc-cur=64:\
:openfiles-cur=8192:\
:stacksize-cur=4M:\
:localcipher=blowfish,6:\
:ypcipher=old:\
:tc=auth-defaults:\
:tc=auth-ftp-defaults:
}}

and

{{
daemon:\
:ignorenologin:\
:datasize=infinity:\
:maxproc=infinity:\
:openfiles-cur=8192:\
:stacksize-cur=8M:\
:localcipher=blowfish,8:\
:tc=default:
}}

And after doing all these changes I uninstalled Squid completely, with
all its files and everything. Then I recompiled and installed it
again... but damn, it gave me the same number of file descriptors. So
now I have reduced the cache to 10 GB. I found a Squid definitive
guide which said to recompile the kernel after editing the kernel
configuration file.


Squid Object Cache: Version 2.6.STABLE13
Start Time: Thu, 09 Aug 2007 19:09:36 GMT
Current Time:   Thu, 09 Aug 2007 19:11:13 GMT
Connection information for squid:
Number of clients accessing cache:  321
Number of HTTP requests received:   2649
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   1638.4
Average ICP messages per minute since start:0.0
Select loop called: 34876 times, 2.782 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 15.1%, 60min: 15.1%
Byte Hit Ratios:5min: 29.4%, 60min: 29.4%
Request Memory Hit Ratios:  5min: 9.7%, 60min: 9.7%
Request Disk Hit Ratios:5min: 44.4%, 60min: 44.4%
Storage Swap size:  23806 KB
Storage Mem size:   2516 KB
Mean Object Size:   7.57 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.68577  0.68577
Cache Misses:  1.24267  1.24267
Cache Hits:0.00179  0.00179
Near Hits: 0.68577  0.68577
Not-Modified Replies:  0.00091  0.00091
DNS Lookups:   0.00190  0.00190
ICP Queries:   0.0  0.0


:(((

Preetish


[squid-users] File Descriptors causing an issue in OpenBSD

2007-08-09 Thread Preetish
Hi Everybody

I have recompiled Squid the way I saw in one of the howtos. This is what I did:

1) I uninstalled Squid.
2) Ran ulimit -HSn 8192, then recompiled Squid with --with-maxfd=8192.
3) Then in my Squid startup script I added ulimit -HSn 8192.

But still it shows the same number of file descriptors
File descriptor usage for squid:
Maximum number of file descriptors:   1024
Largest file desc currently in use:939
Number of file desc currently in use:  929
Files queued for open:   1
Available number of file descriptors:   94
Reserved number of file descriptors:   100
Store Disk files open:  19
IO loop method: kqueue

There is something fishy about it because my cache is only 1.1 GB.
Moreover, there is a file squid.core in my /etc/squid and I do not
understand its purpose. I searched for it online but still did not
understand it. Is squidclient giving me stale results? I had even
cleaned the cache before reinstalling Squid. Is there some different
way to increase the file descriptors on OpenBSD? Kindly help.

Regards
Preetish


Re: [squid-users] Squid too slow.Please Help.Urgent

2007-08-09 Thread Preetish
On 8/9/07, Francesco Perillo <[EMAIL PROTECTED]> wrote:
>
> Probably squidGuard can be of help

I read about it and will definitely try it. Thanks.

Well, now the CPU utilization is in check and the internet speed is
better than before, though not great. I will check the link speed
tonight again. There is still another issue, about file descriptors,
which I will mail about in a new thread.

Thanks Guys

Cheers
Preetish
\m/O\m/~


Re: [squid-users] Squid too slow.Please Help.Urgent

2007-08-08 Thread Preetish
Hi Everybody,

>
> From your previous email, I suspect your squid CPU usage is that high
> due to url_regex acls (perhaps the acl files contain too many regexes
> to evaluate).  Try commenting them out, or simply making those files
> empty.

Great :D. After doing this the CPU utilization has come down to around
3% to 4%. Even my "Number of clients accessing cache" reached 1567
in no time and was still increasing, but like Tek Bahadur said I ran
out of file descriptors :(. So I will now recompile Squid the way I
just saw on one of the mailing lists and let you guys know what happened.

But the issue is my bl-porn.conf has only 3214 lines and I really need
them. Is there any other way I can do it without stressing the CPU?
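The cost difference behind that advice can be sketched in miniature: a url_regex ACL runs every pattern against every request URL, while a dstdomain ACL is essentially a set lookup on the host and its parent domains. A small Python illustration (the domains and patterns here are made up):

```python
import re

# A url_regex-style blacklist scans every pattern per request: O(patterns) work.
patterns = [re.compile(p) for p in [r"badsite\d+\.example", r"proxy.*\.example"]]

def regex_blocked(url):
    return any(p.search(url) for p in patterns)

# A dstdomain-style list is a set lookup on the host and its parent domains.
blocked_domains = {"badsite1.example", "anonymizer.example"}

def domain_blocked(host):
    parts = host.split(".")
    # a dstdomain entry matches the host itself and any subdomain of it
    return any(".".join(parts[i:]) in blocked_domains for i in range(len(parts)))

print(regex_blocked("http://badsite1.example/index.html"))  # True
print(domain_blocked("www.anonymizer.example"))             # True
print(domain_blocked("www.example.org"))                    # False
```

With a few thousand regex entries, that per-request scan adds up, which matches the CPU drop seen after emptying the regex files.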

>>could you show the output of these commands?

>>unset http_proxy
>>time wget http://www.us.kernel.org/pub/linux/kernel/v2.6/linux-2.6.9.tar.gz
>>time wget http://www.in.kernel.org/pub/linux/kernel/v2.6/linux-2.6.9.tar.gz

Well, I do not have wget installed on my proxy, so I did a `ssh -D 1080
[EMAIL PROTECTED] -fN` from one of my servers. I had Squid shut down because it
was giving a lot of warnings. This is what I get when I wget a file:

48.8 MB in 7 min 46 sec --- from India (0.83 Mbps)
48.8 MB in 18 min 47 sec --- from US (0.34 Mbps)
So on a 4

When I am using my local DNS server, this is the time it takes:
{{
time nslookup froogle.google.com
Server: 172.24.2.71
Address:172.24.2.71#53

Non-authoritative answer:
froogle.google.com  canonical name = froogle.l.google.com.
Name:   froogle.l.google.com
Address: 72.14.255.99
Name:   froogle.l.google.com
Address: 72.14.255.103
Name:   froogle.l.google.com
Address: 72.14.255.104
Name:   froogle.l.google.com
Address: 72.14.255.147

0m0.22s real 0m0.00s user 0m0.00s system

and for indiatimes
time nslookup www.indiatimes.com
Server: 172.24.2.71
Address:172.24.2.71#53

Non-authoritative answer:
www.indiatimes.com  canonical name = www.indiatimes.com.edgesuite.net.
www.indiatimes.com.edgesuite.netcanonical name = a1934.g.akamai.net.
Name:   a1934.g.akamai.net
Address: 203.199.74.25
Name:   a1934.g.akamai.net
Address: 203.199.74.11

0m0.19s real 0m0.00s user 0m0.01s system
}}

When I am using the DNS of my ISP, it gives:

{{

time nslookup www.fish.com
Server: 202.78.165.1
Address:202.78.165.1#53

Non-authoritative answer:
Name:   www.fish.com
Address: 12.44.249.13

0m1.31s real 0m0.00s user 0m0.01s system
time nslookup www.gmu.edu
Server: 202.78.165.1
Address:202.78.165.1#53

Non-authoritative answer:
www.gmu.edu canonical name = jiju.gmu.edu.
Name:   jiju.gmu.edu
Address: 129.174.1.52

0m0.71s real 0m0.01s user 0m0.00s system
}}

Well, I hope this information will be of some use. I will recompile
Squid and let all of you know the stats. Thanks to all of you :). And
please let me know an effective way of using url_regex.
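For reference, one common way to take the pressure off (a sketch; the split file names are illustrative): keep plain host names in a dstdomain ACL, which is cheap to match, and leave only the entries that genuinely need a regular expression in url_regex:

```
# fast suffix match for the bulk of the list
acl bl-porn-domains dstdomain "/etc/squid/custom/bl-porn-domains.conf"
# the handful of entries that truly need regex matching
acl bl-porn-regex url_regex -i "/etc/squid/custom/bl-porn-regex.conf"
http_access deny bl-porn-domains
http_access deny bl-porn-regex
```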


Regards
Preetish


Re: [squid-users] Squid too slow.Please Help.Urgent

2007-08-08 Thread Preetish
/squid/custom/ppl/bhavans/vishwakarma.conf"
acl vyas src "/etc/squid/custom/ppl/bhavans/vyas.conf"

#Staff
acl staff src "/etc/squid/custom/ppl/staff.conf"

#IPC Staff
acl ipc src "/etc/squid/custom/ppl/ipc.conf"

#Other Administration
acl ipcstaff src "/etc/squid/custom/ppl/ipcstaff.conf"
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager all
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
# one who can access services on "localhost" is a local user
http_access deny to_localhost

http_access deny bl-virus
http_access deny bl-media
http_access deny bl-mime
http_access deny bl-porn
http_access deny bl-browser

http_access allow allowed

http_access allow meera
http_access allow budh
http_access allow ram
http_access allow ashok
http_access allow bhagirath
http_access allow gandhi
http_access allow krishna
http_access allow ranapratap
http_access allow shankar
http_access allow vishwakarma
http_access allow vyas
http_access allow malviya

http_access allow staff
http_access allow ipcstaff

# And finally deny all other access to this proxy
http_access deny all
http_reply_access allow all

#Allow ICP queries from everyone
icp_access allow all

reply_body_max_size 20971520 allow all

append_domain .xxx.xx.xx

>>Since your average number of connections for your squid box is just
>>about 700 per minute, you should investigate why your CPU usage is
>>unusually high. Squid-2.6.13 is usually very CPU friendly.

I have absolutely no idea. Even on the FC4 box the CPU utilization was
very high. Has anyone come across the same problem? If anyone has,
then kindly help me.

Regards
Preetish


[squid-users] Squid too slow.Please Help.Urgent

2007-08-08 Thread Preetish
Hi Everybody,

We have Squid 2.6.STABLE13 running on an
OpenBSD box along with packet filtering (earlier we used to run it on
Fedora Core 4). The machine is a P4 3.4 GHz with 1 GB RAM, running
a cache of 30 GB. The external link speed is 4 Mbps. We use the DNS
server of our ISP. The internet connection is pathetic. The details
of the output of squidclient are as follows.


Squid Object Cache: Version 2.6.STABLE13
Start Time: Tue, 07 Aug 2007 23:55:51 GMT
Current Time:   Wed, 08 Aug 2007 10:19:24 GMT
Connection information for squid:
Number of clients accessing cache:  761
Number of HTTP requests received:   436323
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   699.7
Average ICP messages per minute since start:0.0
Select loop called: 578899 times, 64.628 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 28.4%, 60min: 23.5%
Byte Hit Ratios:5min: 19.2%, 60min: 19.5%
Request Memory Hit Ratios:  5min: 12.1%, 60min: 13.9%
Request Disk Hit Ratios:5min: 41.5%, 60min: 41.7%
Storage Swap size:  13067832 KB
Storage Mem size:   190880 KB
Mean Object Size:   19.80 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):  11.37373  8.22659
Cache Misses: 15.72468 12.00465
Cache Hits:5.06039  4.07741
Near Hits:14.89826 12.00465
Not-Modified Replies:  3.86308  3.28534
DNS Lookups:   6.80420  4.17707
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:37413.255 seconds
CPU Time:   34220.020 seconds
CPU Usage:  91.46%
CPU Usage, 5 minute avg:95.51%
CPU Usage, 60 minute avg:   97.02%
Process Data Segment Size via sbrk(): 0 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 65
Memory accounted for:
Total accounted:   275284 KB
memPoolAlloc calls: 52025731
memPoolFree calls: 49328868
File descriptor usage for squid:
Maximum number of file descriptors:   1024
Largest file desc currently in use:855
Number of file desc currently in use:  675
Files queued for open:   7
Available number of file descriptors:  342
Reserved number of file descriptors:   100
Store Disk files open:  27
IO loop method: kqueue
Internal Data Structures:
662573 StoreEntries
 29686 StoreEntries with MemObjects
 29517 Hot Object Cache Items
660137 on-disk objects




A few of my Squid configuration directives, which I think you guys may
require, are as follows:

cache_mem 400 MB
maximum_object_size 20480 KB
maximum_object_size_in_memory 20 KB
fqdncache_size 4096
cache_dir aufs /var/squid/cache 32768 64 256
cache_dns_program /usr/local/libexec/dnsserver
dns_children 32
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 5223
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 5223# https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager all
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
# one who can access services on "localhost" is a local user
http_access deny to_localhost
reply_body_max_size 20971520 allow all
append_domain .xxx.xx.xx


Is the number of requests on my server too high? Even my CPU
utilization is too high. Do we need to upgrade the machine? Please help.

Regards
Preetish