Re: [squid-users] Adding Another HardDisk
On Thu, 7 Aug 2008, Adrian Chadd wrote: It -should- just run the create command over each storedir and create missing ones; it might even create missing directories in each storedir.

We've done it many times on both 2.6 and 2.7 with mixed stores of aufs + coss, and it HAS behaved as described above (i.e. it only creates the missing cache_dirs, does not touch existing ones, and adds extra subdirectories if they are missing). I do like the idea of using the dummy squid.conf for the -z operation to minimize the service outage...

adrian

2008/8/6 Amos Jeffries <[EMAIL PROTECTED]>:

Adrian Chadd wrote: The upgrade should go smoothly. A 2.6 ufs/aufs/diskd store should work fine in 2.7.

Also our upgrade from 2.6 to 2.7 went smoothly. :)

Adrian, do you know what the behavior of -z is for a mix of existing and absent cache_dirs?

Amos

Adrian

2008/8/6 Amos Jeffries <[EMAIL PROTECTED]>:

Mr. Issa(*) wrote: Hello all, I really appreciate your hard work on helping and guiding users all over the world. :) I have two questions. First: can I add another hard disk and set it up as a cache_dir for Squid to use, without messing with or rebuilding the other cache_dirs that already hold cached objects? Second: I want to upgrade Squid from 2.6 to 2.7 and have Squid 2.7 use the existing cache_dirs without rebuilding them.

-z should ignore the existing cache_dirs and build only the new one. However, I'm not certain enough of the store to guarantee that. To be extra safe you could do the following:
- create a dummy squid.conf containing just the new cache_dir.
- run the squid binary with -z, passing it the dummy squid.conf (to create the new dir properly, in isolation).
- add the new cache_dir to the real squid.conf and reconfigure the main Squid.

As for the upgrade: a lot of work has gone into making those go smoothly. I believe it's not a problem. But maybe one of the Squid-2 store experts will speak up.

Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE8
--
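A minimal sketch of the dummy-conf approach Amos describes; the paths and the cache_dir line are illustrative assumptions, while -z (create missing swap directories) and -f (use an alternate config file) are standard squid options:

# dummy.conf holds only the new cache_dir; nothing else is touched
cat > /etc/squid/dummy.conf <<'EOF'
cache_dir aufs /newdisk/cache0 30720 48 256
EOF

squid -f /etc/squid/dummy.conf -z   # build swap dirs on the new disk only
# then add the same cache_dir line to the real squid.conf and:
squid -k reconfigure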
Re: [squid-users] cachemgr storeEntries count and coss
Replying to my own post: it does seem that cachemgr counts the number of objects from all available cache_dirs. In my case, the number of objects of size 0-16 KB was so huge that shifting them to a small 4 GB coss dir caused the total number of cached objects to drop massively. I've now added 12 GB more to the coss dir and the number of objects is picking up well. Thanks anyway.

Manoj

On Tue, 22 Jul 2008, Manoj_Rajkarnikar wrote:

Hi all. I've recently added a coss store to our cache and it is performing great. But I see that cachemgr is showing a decrease in the StoreEntries count, and the mean object size has also increased from ~15k to ~25k. Before coss was introduced we had 64G of cache_dir split across 2 SCSI disks and ~4M objects in the store. We then introduced a 4G coss dir with a max object size of 16k on a separate SATA disk, and after 4 days of operation the coss store is 82% full while cachemgr reports that we only have ~2.4M objects in the store. We're using squid 2.7S3.

squid.conf:
cache_dir coss /coss/cossfile 4096 block-size=512 max-stripe-waste=16384 max-size=16384
cache_dir aufs /sda1/cache0 30720 50 256 min-size=16384 max-size=18874368
cache_dir aufs /sdb1/cache0 30720 50 256 min-size=16384 max-size=18874368

cachemgr:
Internal Data Structures:
2412947 StoreEntries
27350 StoreEntries with MemObjects
26900 Hot Object Cache Items
2410991 on-disk objects

Cache information for squid:
Request Hit Ratios:     5min: 37.6%, 60min: 39.8%
Byte Hit Ratios:        5min: 43.3%, 60min: 41.3%
Request Memory Hit Ratios:      5min: 4.9%, 60min: 5.3%
Request Disk Hit Ratios:        5min: 48.3%, 60min: 48.4%
Storage Swap size:      63239875 KB
Storage Mem size:       261712 KB
Mean Object Size:       26.29 KB

Does cachemgr count the store entries in coss stores as well?

Thanks
Manoj
--
[squid-users] cachemgr storeEntries count and coss
Hi all. I've recently added a coss store to our cache and it is performing great. But I see that cachemgr is showing a decrease in the StoreEntries count, and the mean object size has also increased from ~15k to ~25k. Before coss was introduced we had 64G of cache_dir split across 2 SCSI disks and ~4M objects in the store. We then introduced a 4G coss dir with a max object size of 16k on a separate SATA disk, and after 4 days of operation the coss store is 82% full while cachemgr reports that we only have ~2.4M objects in the store. We're using squid 2.7S3.

squid.conf:
cache_dir coss /coss/cossfile 4096 block-size=512 max-stripe-waste=16384 max-size=16384
cache_dir aufs /sda1/cache0 30720 50 256 min-size=16384 max-size=18874368
cache_dir aufs /sdb1/cache0 30720 50 256 min-size=16384 max-size=18874368

cachemgr:
Internal Data Structures:
2412947 StoreEntries
27350 StoreEntries with MemObjects
26900 Hot Object Cache Items
2410991 on-disk objects

Cache information for squid:
Request Hit Ratios:     5min: 37.6%, 60min: 39.8%
Byte Hit Ratios:        5min: 43.3%, 60min: 41.3%
Request Memory Hit Ratios:      5min: 4.9%, 60min: 5.3%
Request Disk Hit Ratios:        5min: 48.3%, 60min: 48.4%
Storage Swap size:      63239875 KB
Storage Mem size:       261712 KB
Mean Object Size:       26.29 KB

Does cachemgr count the store entries in coss stores as well?

Thanks
Manoj
--
Re: [squid-users] upgrading from squid 2.6 to 2.7
On Sun, 29 Jun 2008, Adrian Chadd wrote:

On Sun, Jun 29, 2008, Manoj_Rajkarnikar wrote: I see that there is an option to specify the number of threads for aufs. What is the optimum number of threads / what is the default number that squid uses? Can it be altered to affect squid performance?

The optimum number of threads depends entirely on your situation. It can be altered (and it should be a runtime tunable if it isn't! anyway..), and having too many threads can result in your Squid performing poorly - not because there are so many threads, but because it lulls Squid's internal code into thinking the disk system can handle much more load than it can.

In my situation, I have only one cache that handles about 14 Mbit/s of intercepted traffic on 2 x 36GB SCSI disks. Linux is on a separate SATA disk. The CPU is a P4 with HT enabled. I can see 2 aio processes; I assume those are the async I/O threads I currently have. Please correct me if I'm mistaken.

Bringing some kind of sanity to performance tuning storage is something I'd like to spend some time doing, but my free time is all booked up at the moment, sorry.

We've seen a tremendous increase in performance from 2.5 to 2.6/2.7. All credit goes to the squid dev team and, to some extent, to the user community bringing in patches and ideas. People are benefiting from it a lot.

Cheers...
Manoj
--
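For reference, and as an assumption on my part since the thread never names the knob: in the Squid 2.6/2.7 builds discussed here the aufs worker-thread count is normally fixed at build time rather than in squid.conf. A sketch of how it is usually tuned (check ./configure --help on your release before relying on the flag name):

./configure --enable-storeio=aufs,ufs --with-aufs-threads=16
make && make install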
Re: [squid-users] upgrading from squid 2.6 to 2.7
On Tue, 10 Jun 2008, Adrian Chadd wrote: Besides whatever changes are in the release notes, I think you'll be fine. I tried reasonably hard to make 2.6 -> 2.7 a seamless update; the only surprises could be the storeUpdate stuff which Henrik included near the end of the development cycle. That can be turned off to fall back to the Squid-2.6 behaviour.

Thanks Adrian. I see that there is an option to specify the number of threads for aufs. What is the optimum number of threads / what is the default number that squid uses? Can it be altered to affect squid performance?

Thanks
Manoj

On Tue, Jun 10, 2008, Manoj_Rajkarnikar wrote: Hi all. Is there any special point to note when upgrading from 2.6S19 to 2.7S2? I searched for any clues but found none; just trying to confirm. Our cache is serving ~40% of our internet bandwidth and everything would go haywire if it went down during the upgrade.

Thanks
Manoj
--
--
[squid-users] upgrading from squid 2.6 to 2.7
Hi all. Is there any special point to note when upgrading from 2.6S19 to 2.7S2? I searched for any clues but found none; just trying to confirm. Our cache is serving ~40% of our internet bandwidth and everything would go haywire if it went down during the upgrade.

Thanks
Manoj
--
Re: [squid-users] squid and wccp
IP xxx.yyy.zzz.16.1999 > 208.122.6.235.80: . ack 3193965999 win 65535
10:26:18.897020 IP xxx.yyy.zzz.123.4098 > 209.216.46.132.80: . ack 586983296 win 17424
10:26:18.897790 IP xxx.yyy.zzz.209.62383 > 203.84.204.69.80: . ack 1194719072 win 65114
10:26:18.897799 IP xxx.yyy.zzz.209.62383 > 203.84.204.69.80: F 0:0(0) ack 1 win 65114

1.5 iptables:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/gre0/rp_filter
/sbin/iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp -s --dport 80 -j REDIRECT --to-port
/sbin/iptables -A INPUT -i gre0 -p tcp -s --dport -j ACCEPT

2. Router:
2.1 Router version: 7204VXR npe 300, IOS version 12.2(46a)
2.2 Config:
ip wccp version 2
ip wccp web-cache redirect-list SQUID-BYPASS-NEW

interface FastEthernet0/0.128
 description Connection to internet
 bandwidth 24000
 encapsulation dot1Q 128
 ip address xxx.xxx.xxx.201 255.255.255.252
 ip access-group PORT_BLOCK in
 ip access-group PORT_BLOCK out
 ip wccp web-cache redirect out
 no cdp enable

Router#sh ip wccp web-cache detail
WCCP Cache-Engine information:
 Web Cache ID:          xxx.xxx.xxx.234
 Protocol Version:      2.0
 State:                 Usable
 Initial Hash Info:
 Assigned Hash Info:
 Hash Allotment:        256 (100.00%)
 Packets Redirected:    1166385116
 Connect Time:          3w3d

Router#sh ip wccp web-cache
Global WCCP information:
 Router information:
  Router Identifier:             xxx.xxx.xxx.226
  Protocol Version:              2.0
 Service Identifier: web-cache
  Number of Cache Engines:       1
  Number of routers:             1
  Total Packets Redirected:      553854367
  Redirect access-list:          SQUID-BYPASS-NEW
  Total Packets Denied Redirect: 1050502969
  Total Packets Unassigned:      126368
  Group access-list:             -none-
  Total Messages Denied to Group: 0
  Total Authentication failures:  0

### That's it... working great for us.

- Original Message -
From: "Manoj_Rajkarnikar" <[EMAIL PROTECTED]>
To: "Wennie V. Lagmay" <[EMAIL PROTECTED]>
Cc: "squid-users"
Sent: Monday, April 28, 2008 2:22:34 PM (GMT+0300) Asia/Kuwait
Subject: Re: [squid-users] squid and wccp

On Mon, 28 Apr 2008, Wennie V. Lagmay wrote: I am trying to configure squid wccp and a cisco router but with no luck. This is what I have done. Please check my procedure and configuration: squid version 2.6STABLE19 running on Fedora Core 8 64-bit with IP address xx.xx.184.178. 1. I configured squid with the option --enable-linux-netfilter.

Please provide the output of "squid -v".
--
Re: [squid-users] squid and wccp
On Mon, 28 Apr 2008, Wennie V. Lagmay wrote: I am trying to configure squid wccp and a cisco router but with no luck. This is what I have done. Please check my procedure and configuration: squid version 2.6STABLE19 running on Fedora Core 8 64-bit with IP address xx.xx.184.178.

1. I configured squid with the option --enable-linux-netfilter.

Please provide the output of "squid -v".

2. In squid.conf:
http_port 8080 transparent
wccp2_router xx.xx.184.177
wccp2_version 4
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0
wccp2_address 0.0.0.0

3. modprobe ip_gre
ip tunnel add wccp0 mode gre remote xx.xx.184.177 local xx.xx.184.178 dev eth1
ip addr add xx.xx.184.178/32 dev wccp0
ip link set wccp0 up

4. echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter

5. iptables -t nat -A PREROUTING -p tcp -i wccp0 -j REDIRECT --to-ports 8080

6. iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 8080

On the Cisco router (7206 npe300 with 12.2(31)):
ip wccp version 2
ip wccp web-cache
!
interface fastethernet 1/0
 description LAN
 ip address 192.168.255.6 255.255.255.252
!
interface fastethernet 3/0
 description internet connection
 ip address xx.xx.184.177
 ip wccp web-cache redirect out
!
ip route 0.0.0.0 0.0.0.0 192.158.255.5

Which interface connects to the internet? The default route indicates fa1/0 to be connected to the internet. If it is fa1/0, the "ip wccp web-cache redirect out" command should be on fa1/0.

Logs: in the Linux cache.log I can see messages such as:
wccp2HereIam: Sending to device id 0
Sending HereIam packet size 144
Incoming WCCPv2 I_SEE_YOU length 132
Complete packet received

On the Cisco router:
sho ip wccp web-cache
Global WCCP information:
 Router information:
  Router Identifier:             192.168.255.6
  Protocol Version:              2.0
 Service Identifier: web-cache
  Number of Cache Engines:       1
  Number of routers:             1
  Total Packets Redirected:      201
  Redirect access-list:          -none-
  Total Packets Denied Redirect: 0
  Total Packets Unassigned:      0
  Group access-list:             -none-
  Total Messages Denied to Group: 0
  Total Authentication failures:  0

sho ip wccp web-cache detail
 Web Cache ID:          xx.xx.184.178
 Protocol Version:      2.0
 State:                 Usable
 Initial Hash Info:
 Assigned Hash Info:
 Hash Allotment:        256 (100.00%)
 Packets Redirected:    201
 Connect Time:          01:14:03

What about tcpdump on the wccp0 interface - does it show any traffic being redirected? Does access.log show the connections?

It seems everything is working fine, but when configuring the client browser without any proxy it is not browsing. Note that if I manually define the IP address of the transparent proxy I can browse the web. Can anybody help me with my problem?

thank you very much,
Wennie

- Original Message -
From: "Adrian Chadd" <[EMAIL PROTECTED]>
To: "Wennie V. Lagmay" <[EMAIL PROTECTED]>
Cc: "Adrian Chadd" <[EMAIL PROTECTED]>, "squid-users"
Sent: Saturday, April 26, 2008 8:31:43 PM (GMT+0300) Asia/Kuwait
Subject: Re: [squid-users] squid and wccp

On Sat, Apr 26, 2008, Wennie V. Lagmay wrote: I have a question: do I need to enable ip_gre and ip_wccp on my system? Using kernel 2.6.24, I enabled ip_gre - does that mean it automatically enables ip_wccp?

Just ip_gre. The GRE code shipped in Linux these days includes WCCPv2 packet decoding.

HTH,
Adrian

thanks

- Original Message -
From: "Adrian Chadd" <[EMAIL PROTECTED]>
To: "Wennie V. Lagmay" <[EMAIL PROTECTED]>
Cc: "squid-users"
Sent: Saturday, April 26, 2008 12:38:07 PM (GMT+0300) Asia/Kuwait
Subject: Re: [squid-users] squid and wccp

http://wiki.squid-cache.org/ConfigExamples/

Adrian

On Sat, Apr 26, 2008, Wennie V. Lagmay wrote: Hi all, Can anybody give me a step-by-step configuration to enable WCCP on both the router and squid 2.6.STABLE19? Here are the details:

router = cisco 7206VXR
IOS ver = 12.3(8)T, RELEASE SOFTWARE (fc2)
FE0/0 = xx.xx.184.17/28

squid:
OS = FC8 64-bit with kernel version 2.6.24.4-64.fc8 #1 SMP
squid version = squid-2.6Stable19
eth1 = xx.xx.184.22/28

I am trying to follow the configuration in the Squid FAQ but it is very hard for me because this is my first time doing this kind of setup. I would highly appreciate it if you could provide me a step-by-step configuration for the cisco router and the squid box to enable WCCP version 2.

Thank you and best regards,
wennie

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
--
Re: [squid-users] Marking Cached traffic..
On Wed, 16 Apr 2008, Adrian Chadd wrote:

On Wed, Apr 16, 2008, Stephan Viljoen wrote: Hi there, I was wondering whether it's possible to mark cached traffic with a different TOS than uncached traffic. I need to come up with a way of passing cached traffic through our bandwidth manager without taxing the end user for it - basically giving them the full benefit of the proxy server.

There's the http://zph.bratcheda.org/ stuff.

We've been using this one since the 2.5 days and it is doing the job pretty well. It seems quite stable. We're currently using 2.6S19.

I'm probably going to roll it into my private tree after I've stabilised the codebase. Someone's asked me about it.

Will we be seeing it in the mainline tree some time in the future?

Adrian
--
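To illustrate the idea (this is not from the thread, and the TOS value, port and class IDs below are assumptions): once the ZPH patch marks cache hits with a distinct TOS, the box doing the bandwidth accounting can match that mark with an ordinary iptables rule and steer it into an unmetered class, for example:

# hypothetical: hits are assumed to carry TOS 0x30; 1:30 is an example tc class
iptables -t mangle -A FORWARD -p tcp --sport 80 -m tos --tos 0x30 \
         -j CLASSIFY --set-class 1:30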
Re: [squid-users] TCP_HIT and TCP_MISS
On Mon, 17 Mar 2008, Guillaume Chartrand wrote: Hi everybody, I have been running Squid 2.6.STABLE12 for a few months, and when I look at my access.log I have no TCP_HIT, only TCP_MISS, so it seems to cache nothing. And if I look at cache.log I have almost only entries like:

httpReadReply: Excess data from ... or:
WARNING! Your cache is running out of filedescriptors
2008/03/17 07:23:54| WARNING: All url_rewriter processes are busy.
2008/03/17 07:23:54| WARNING: up to 27 pending requests queued

Some information from "squid -v" and squid.conf, please.

My configuration is a transparent squid with a WCCP router to redirect without client configuration. So what did I do wrong?

Guillaume Chartrand
Technicien informatique
Cégep régional de Lanaudière
Centre administratif, Repentigny
(450) 470-0911 poste 7218
--
Re: [squid-users] problem with wccp v2 and cisco
On Mon, 25 Feb 2008, Adrian Chadd wrote:

On Mon, Feb 25, 2008, Manoj_Rajkarnikar wrote: I have a much simpler setup working on CentOS x86_64 2.6.23 and a cisco 7204VXR, IOS version 12.2(46a), squid version 2.6 STABLE17:

Which IOS release specifically? Could you throw me a "show version"?

Sure..

iris>sh ver
Cisco Internetwork Operating System Software
IOS (tm) 7200 Software (C7200-IK9O3S-M), Version 12.2(46a), RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2007 by cisco Systems, Inc.
Compiled Thu 12-Jul-07 00:39 by pwade
Image text-base: 0x60008940, data-base: 0x6148E9F0
ROM: System Bootstrap, Version 12.1(2824:081033) [dbeazley-cosmos_e_LATEST 101], DEVELOPMENT SOFTWARE
BOOTLDR: 7200 Software (C7200-BOOT-M), Version 12.0(15)S, EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)
iris uptime is 7 weeks, 1 day, 7 hours, 27 minutes
System returned to ROM by power-on
System restarted at 06:15:24 NP Mon Jan 7 2008
System image file is "slot0:c7200-ik9o3s-mz.122-46a.bin"

iris>sh ip wccp
Global WCCP information:
 Router information:
  Router Identifier:             XXX.XXX.XXX.XXX
  Protocol Version:              2.0
 Service Identifier: web-cache
  Number of Cache Engines:       1
  Number of routers:             1
  Total Packets Redirected:      2016105442
  Redirect access-list:          SQUID-BYPASS-NEW
  Total Packets Denied Redirect: 471709239
  Total Packets Unassigned:      79667
  Group access-list:             -none-
  Total Messages Denied to Group: 0
  Total Authentication failures:  0

I'll start a wiki page with "known good" versions of IOS that work with Squid. (And those of you who are running Squid+WCCPv2, please fire off your "show version" and "show ip wccp" related outputs so I can update the list.)

Thanks!

Adrian

* recompile kernel with CONFIG_NET_IPGRE=m
* compile squid with wccpv2 support
* setup gre0 interface with some unused private IP assigned to it
* intercept in iptables:
  iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j REDIRECT --to-port 3128

As Henrik suggested me during this setup, IPGRE module in kernel automatically decapsulates the gre packets on the gre0 interface and it has been doing it so far.

gre0      Link encap:UNSPEC  HWaddr 00-00-00-00-FF-F8-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.172.2  Mask:255.255.255.252
          UP RUNNING NOARP  MTU:1476  Metric:1
          RX packets:1970129052 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:3666 dropped:0 overruns:0 carrier:0
          collisions:3666 txqueuelen:0
          RX bytes:305795313631 (284.7 GiB)  TX bytes:0 (0.0 b)

[EMAIL PROTECTED] ~]# cat /etc/sysconfig/network-scripts/ifcfg-gre0
DEVICE=gre0
BOOTPROTO=static
BROADCAST=192.168.172.3
IPADDR=192.168.172.2
NETMASK=255.255.255.252
NETWORK=192.168.172.0
ONBOOT=yes
TYPE=Ethernet

This setup has been working nicely for me.

Manoj

Adrian
--
--
Re: [squid-users] problem with wccp v2 and cisco
On Sun, 24 Feb 2008, Adrian Chadd wrote: There's only a small number of things you have to do to set up WCCPv2.

* configure/compile squid with the relevant transparent interception option. For you it's --enable-linux-netfilter IIRC.
* enable IP forwarding in Linux
* create the GRE tunnel
* point the GRE endpoint at your router's WCCPv2 router-id - use a loopback interface on the Cisco for now; that'll make it much, much more predictable, as the WCCPv2 router-id is then always the loopback id
* for ease of testing, make sure no iptables rules exist, then add:
  iptables -A PREROUTING -i -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

I have a much simpler setup working on CentOS x86_64 2.6.23 and a cisco 7204VXR, IOS version 12.2(46a), squid version 2.6 STABLE17:

* recompile the kernel with CONFIG_NET_IPGRE=m
* compile squid with wccpv2 support
* set up the gre0 interface with some unused private IP assigned to it
* intercept in iptables:
  iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j REDIRECT --to-port 3128

As Henrik suggested to me during this setup, the IPGRE module in the kernel automatically decapsulates the GRE packets on the gre0 interface, and it has been doing so ever since.

gre0      Link encap:UNSPEC  HWaddr 00-00-00-00-FF-F8-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.172.2  Mask:255.255.255.252
          UP RUNNING NOARP  MTU:1476  Metric:1
          RX packets:1970129052 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:3666 dropped:0 overruns:0 carrier:0
          collisions:3666 txqueuelen:0
          RX bytes:305795313631 (284.7 GiB)  TX bytes:0 (0.0 b)

[EMAIL PROTECTED] ~]# cat /etc/sysconfig/network-scripts/ifcfg-gre0
DEVICE=gre0
BOOTPROTO=static
BROADCAST=192.168.172.3
IPADDR=192.168.172.2
NETMASK=255.255.255.252
NETWORK=192.168.172.0
ONBOOT=yes
TYPE=Ethernet

This setup has been working nicely for me.

Manoj

Adrian
--
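As a quick sanity check (my own suggestion, not from the thread; interface names are assumptions), it helps to confirm that redirected traffic is actually arriving and being decapsulated before suspecting Squid itself:

modprobe ip_gre                      # make sure the GRE module is loaded
tcpdump -n -i gre0 'tcp port 80'     # should show client->origin SYNs once the router redirects
iptables -t nat -L PREROUTING -n -v  # packet counters on the REDIRECT rule should be climbing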
Re: [squid-users] Unable to increase filedescriptor limit -- tried all things
On Fri, 25 Jan 2008, bijayant kumar wrote: Hi Arana, Thanks for your reply. As you suggest, increasing the filedescriptor limit can be dangerous. Is there any other way to get rid of this warning? Because this warning makes browsing dead slow, and the box is deployed at our client's place, I have to fix things fast. If you have any suggestion besides increasing the file descriptor limit, please share it.

No, AFAIK you'll have to raise the FD limit - just don't raise it too high; that was the suggestion. Set it to 2048 or 4096 to meet the current and near-future workload and increase it again in the future if needed...

--- Gonzalo Arana <[EMAIL PROTECTED]> wrote: I would recommend you to run ./configure with --with-maxfd=your_desired_limit and --enable-epoll. Watch for messages like this in the configure output:

checking if epoll works... yes
Using epoll for the IO loop.
...
Maximum filedescriptors set to 131072
...

Having a large number of FDs with select is dangerous. Also, I recall there was an issue with increasing FD_SETSIZE on glibc (Linux uses glibc).

HTH,

On Jan 24, 2008 11:46 AM, Bijayant <[EMAIL PROTECTED]> wrote: Hello list, I am using squid as a proxy server on a gentoo box. All of a sudden, from 2nd January, in my cache.log I am seeing the error:

WARNING! Your cache is running out of filedescriptors

When this message repeats frequently, browsing becomes dead slow on the 2 Mbps line. We have 2GB RAM, 1GB swap and a dual-core processor system. After googling and checking the Squid FAQ I have tried to increase the filedescriptor limit on my system, but I am not able to. Please help me out. Here is some information for a better picture:

OS - gentoo
Kernel - 2.6.18-gentoo-r6
Squid - net-proxy/squid-2.6.12
USE flags = ipf-transparent pam ssl

I have changed the filedescriptors in /usr/include/bits/typesizes.h:
/* Number of descriptors that can fit in an `fd_set' */
#define __FD_SETSIZE 2048

In /etc/init.d/squid:
ulimit -HSn 2048

~ $ cat /proc/sys/fs/file-max
50516

The relevant part of /etc/squid/squid.conf after searching google/the FAQ:
client_persistent_connections off
server_persistent_connections off
cache_dir ufs /var/cache/squid 2000 16 256
url_rewrite_children 30

I did all the things specified in the Squid wiki and FAQ. After that I recompiled squid and also rebooted my machine, without any luck. I am still getting the warning in my logs, and ulimit -n is still 1024. I have tried all possible things without any success. Please help me or give me some direction.

--
Gonzalo A. Arana

Bijayant Kumar
--
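A condensed sketch of the fix being discussed; the 4096 figure is illustrative, while --with-maxfd and --enable-epoll are the configure options Gonzalo names. The point is that the shell limit at startup and the compiled-in maximum both have to be raised:

# in the init script, before the squid binary is started:
ulimit -HSn 4096

# rebuild squid with a matching compiled-in limit (keep your other options):
./configure --with-maxfd=4096 --enable-epoll ...
make && make install

# after a restart, the startup lines in cache.log should report the new
# number of available file descriptors instead of 1024.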
Re: [squid-users] Mem Cache flush
On Wed, 23 Jan 2008, Matus UHLAR - fantomas wrote:

On Fri, 18 Jan 2008, Haytham KHOUJA wrote: I don't advise you to use partitions as separate cache_dirs. You will not get a performance enhancement since you're still working on the same physical disk and the same SCSI controller.

On 20.01.08 11:57, Manoj_Rajkarnikar wrote: It's not 2 partitions on 1 physical disk; I have 2 SCSI disks... the controller however is the same.

You seem to have 4 cache_dirs on each of those 2 disks. That's useless and inefficient. Just use one cache_dir per disk (unless you use different storage types, e.g. COSS).

Hmm... so if I use only one cache_dir per disk, I think I'll need to increase the number of first-level directories too. I currently have ~4M objects in my cache split onto 2 disks (~2M objects per disk). So as I understand it: 2M objects / (256 objects per 2nd-level dir x 256 2nd-level dirs per 1st-level dir) = ~30+ 1st-level dirs. Am I calculating that wrong? Please also give some hints on how putting multiple cache_dirs on a single disk would inefficiently affect squid...

Thanks.
Manoj
--
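For reference, the arithmetic above matches the usual rule of thumb for sizing the L1 value of a ufs/aufs cache_dir. The 30 GB size, 15 KB mean object size and the final cache_dir line below are assumptions for illustration, not figures from this thread:

# objects = cache_dir size (KB) / mean object size (KB)
# L1 dirs = objects / (L2 * 256), rounded up; here L2 = 256
echo $(( (30720 * 1024 / 15) / (256 * 256) + 1 ))
# prints roughly 33, so something like:
# cache_dir aufs /sda1/cache0 30720 48 256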
Re: [squid-users] Mem Cache flush
Hi Adrian,

On Sat, 19 Jan 2008, Adrian Chadd wrote: If you think there's a bug then please submit a bugzilla report.

Not sure if it's a bug; I don't have a situation where it can be reproduced... in fact, I'm not even sure it will occur again... can you please suggest how I can gather more information to submit if it does occur again?

Thanks
Manoj

Adrian
Re: [squid-users] Mem Cache flush
Hi Haytham,

On Fri, 18 Jan 2008, Haytham KHOUJA wrote: Hello, I don't advise you to use partitions as separate cache_dirs. You will not get a performance enhancement since you're still working on the same physical disk and the same SCSI controller.

It's not 2 partitions on 1 physical disk; I have 2 SCSI disks... the controller however is the same.

Moreover, 1GB for cache_mem is way too much if you only have 3GB of total RAM with 8 x 8GB cache_dirs. Lower that to around 256 MB ~ 512 MB and try.

Let me try that... I still have free memory, though. By the rule of thumb we need 10MB of RAM for the index per GB of cache; that sums up to 8 x 8GB x 10MB/GB = ~650MB RAM for the cache index + 1GB for cache_mem = ~1.7GB, so I'd still have ~1.3GB of RAM for system operation. I'll lower cache_mem, however, to try.

Thanks.
Manoj

Hi all. I'm having an issue with our cache. It's 2.6S17 on CentOS 4 x86_64 with 3GB RAM, 2 x 36GB SCSI disks for the cache_dirs and a separate 80GB SATA disk for the system. It seems to flush the memory cache for no apparent reason - nothing in cache.log around the time of occurrence. Please see the graph at the following link; it has done this twice in 10 days, and we've not encountered such an issue before. The relevant squid.conf settings follow:

cache_mem 1024 MB
maximum_object_size_in_memory 16 KB
cache_replacement_policy heap LRU
cache_dir aufs /sda1/cache0 8000 20 256
cache_dir aufs /sda1/cache1 8000 20 256
cache_dir aufs /sda1/cache2 8000 20 256
cache_dir aufs /sda1/cache3 8000 20 256
cache_dir aufs /sdb1/cache0 8000 20 256
cache_dir aufs /sdb1/cache1 8000 20 256
cache_dir aufs /sdb1/cache2 8000 20 256
cache_dir aufs /sdb1/cache3 8000 20 256
maximum_object_size 18432 KB
cache_swap_low 95
cache_swap_high 97
access_log /usr/local/squid/var/logs/access.log squid
cache_store_log none
logfile_rotate 7

The 2 x 36GB SCSI disks are mounted on /sda1 and /sdb1 as:
/dev/sda1 on /sda1 type reiserfs (rw,noatime,notail)
/dev/sdb1 on /sdb1 type reiserfs (rw,noatime,notail)

I can see lots of entries in cache.log that look like:
2008/01/18 09:24:18| storeLocateVary: Not our vary marker object, 1B384B5F6E91A0FE402D4DA6F9B067B0 = 'http://x.tagstat.com/js/announcements.js', 'accept-encoding="gzip,%20deflate"'/'gzip, deflate'

Any suggestions please..

Thanks
Manoj
--
--
[squid-users] Re: Mem Cache flush
Ooops.. forgot to post the link to the graph... http://vianet.com.np/cache/

Thanks
Manoj

On Fri, 18 Jan 2008, Manoj_Rajkarnikar wrote:

Hi all. I'm having an issue with our cache. It's 2.6S17 on CentOS 4 x86_64 with 3GB RAM, 2 x 36GB SCSI disks for the cache_dirs and a separate 80GB SATA disk for the system. It seems to flush the memory cache for no apparent reason - nothing in cache.log around the time of occurrence. Please see the graph at the following link; it has done this twice in 10 days, and we've not encountered such an issue before. The relevant squid.conf settings follow:

cache_mem 1024 MB
maximum_object_size_in_memory 16 KB
cache_replacement_policy heap LRU
cache_dir aufs /sda1/cache0 8000 20 256
cache_dir aufs /sda1/cache1 8000 20 256
cache_dir aufs /sda1/cache2 8000 20 256
cache_dir aufs /sda1/cache3 8000 20 256
cache_dir aufs /sdb1/cache0 8000 20 256
cache_dir aufs /sdb1/cache1 8000 20 256
cache_dir aufs /sdb1/cache2 8000 20 256
cache_dir aufs /sdb1/cache3 8000 20 256
maximum_object_size 18432 KB
cache_swap_low 95
cache_swap_high 97
access_log /usr/local/squid/var/logs/access.log squid
cache_store_log none
logfile_rotate 7

The 2 x 36GB SCSI disks are mounted on /sda1 and /sdb1 as:
/dev/sda1 on /sda1 type reiserfs (rw,noatime,notail)
/dev/sdb1 on /sdb1 type reiserfs (rw,noatime,notail)

I can see lots of entries in cache.log that look like:
2008/01/18 09:24:18| storeLocateVary: Not our vary marker object, 1B384B5F6E91A0FE402D4DA6F9B067B0 = 'http://x.tagstat.com/js/announcements.js', 'accept-encoding="gzip,%20deflate"'/'gzip, deflate'

Any suggestions please..

Thanks
Manoj
--
[squid-users] Mem Cache flush
Hi all. I'm having an issue with our cache. It's 2.6S17 on CentOS 4 x86_64 with 3GB RAM, 2 x 36GB SCSI disks for the cache_dirs and a separate 80GB SATA disk for the system. It seems to flush the memory cache for no apparent reason - nothing in cache.log around the time of occurrence. Please see the graph at the following link; it has done this twice in 10 days, and we've not encountered such an issue before. The relevant squid.conf settings follow:

cache_mem 1024 MB
maximum_object_size_in_memory 16 KB
cache_replacement_policy heap LRU
cache_dir aufs /sda1/cache0 8000 20 256
cache_dir aufs /sda1/cache1 8000 20 256
cache_dir aufs /sda1/cache2 8000 20 256
cache_dir aufs /sda1/cache3 8000 20 256
cache_dir aufs /sdb1/cache0 8000 20 256
cache_dir aufs /sdb1/cache1 8000 20 256
cache_dir aufs /sdb1/cache2 8000 20 256
cache_dir aufs /sdb1/cache3 8000 20 256
maximum_object_size 18432 KB
cache_swap_low 95
cache_swap_high 97
access_log /usr/local/squid/var/logs/access.log squid
cache_store_log none
logfile_rotate 7

The 2 x 36GB SCSI disks are mounted on /sda1 and /sdb1 as:
/dev/sda1 on /sda1 type reiserfs (rw,noatime,notail)
/dev/sdb1 on /sdb1 type reiserfs (rw,noatime,notail)

I can see lots of entries in cache.log that look like:
2008/01/18 09:24:18| storeLocateVary: Not our vary marker object, 1B384B5F6E91A0FE402D4DA6F9B067B0 = 'http://x.tagstat.com/js/announcements.js', 'accept-encoding="gzip,%20deflate"'/'gzip, deflate'

Any suggestions please..

Thanks
Manoj
--
Re: [squid-users] cache_dir
On Wed, 9 Jan 2008, [EMAIL PROTECTED] wrote: I have been asked to continue proxying connections out to the Internet, but to discontinue caching web traffic. After reading the FAQ and the config guide (2.6STABLE12) I found that 'cache_dir null' is the approach. It's failing. The error is:

Daemon: FATAL: Bungled squid.conf line 19: cache_dir null

For giggles, I even tried giving cache_dir null a directory argument, which also failed. The FAQ shows that this option is not enabled or available in the default build of squid. I'm reading through the configure script trying to find some verbiage that might help me locate the compile option. Here are the compile-time options I'm setting for this build:

./configure --prefix=/services/proxy --enable-icmp --enable-snmp --enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl --enable-ssl --disable-select --disable-poll --enable-epoll --enable-large-cache-files --disable-ident-lookups --enable-stacktraces --with-large-files && make && make install

You need to add the configure option --enable-storeio=null to the above. A snip from the configure help:

--enable-storeio="list of modules"
    Build support for the list of store I/O modules. The default is only to build the "ufs" module. See src/fs for a list of available modules, or the Programmers Guide section for details on how to build your custom store module.

Manoj

Can someone please help me disable local caching?

Thanks,
Tim Rainier
--
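A short sketch of the whole fix: the store list below is only an example (keep whichever other modules you need), and "cache_dir null /tmp" with a dummy directory argument is the form the FAQ of that era used, so treat the exact syntax as an assumption for your release:

./configure --prefix=/services/proxy --enable-storeio=ufs,null ...   # plus the rest of your options
make && make install

# squid.conf: a null store keeps proxying but disables disk caching
cache_dir null /tmp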
Re: [squid-users] how to increase memory hit ratio
On Tue, 8 Jan 2008, Alexandre Correa wrote: I'm using GDSF for memory_replacement and LFUDA for disk!! I will increase cache_mem to 512 MB and maximum_object_size_in_memory to 512 KB... and see what happens.

What's the total size of your cache_dir? If it's under 200GB you can safely increase cache_mem to 1GB.

Manoj

... the load on these servers is very low...

thanks for the answers !! :)
--
Re: Fwd: [squid-users] aufs cache_dir growing beyond configured limits?
On Thu, 3 Jan 2008, Adrian Chadd wrote: Sorry, no, the actual UNIX filesystem. :) Ok got it. I'm using reiser v3 with notail and noatime mount options. should I be worrying too much over free space on my cache partitions? Thanks Manoj Adrian --
Re: Fwd: [squid-users] aufs cache_dir growing beyond configured limits?
On Thu, 3 Jan 2008, Adrian Chadd wrote:

On Thu, Jan 03, 2008, Manoj_Rajkarnikar wrote: Thanks. I'll stick with my current 10% then. :)

If you're running a UFS derivative then see when the FS changes optimisation from SPACE to TIME and back.

I'm stumped here... I really didn't understand the above line. Are you trying to say that when the free space on the disk gets low, AUFS (that's what I'm using) starts to optimize for space rather than service times?

Manoj
--
Re: Fwd: [squid-users] aufs cache_dir growing beyond configured limits?
On Wed, 2 Jan 2008, Henrik Nordstrom wrote:

On Wed, 2008-01-02 at 10:56 +0545, Manoj_Rajkarnikar wrote: Is it really REQUIRED to leave 30% of free space?

No, not strictly required, but good advice regardless.
- Most filesystems do a much better job with at least 30% free.
- Nearly all filesystems keep functioning reasonably well even with much less free space.

Thanks. I'll stick with my current 10% then. :)

Regards
Henrik
--
Re: Fwd: [squid-users] aufs cache_dir growing beyond configured limits?
Hi Henrik, sorry to hijack the topic.

On Tue, 1 Jan 2008, Henrik Nordstrom wrote:

On Mon, 2007-12-31 at 08:48 -0800, Neil Harkins wrote:

On 12/30/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote: 950 MB == 972800 KB

Yes, Adrian pointed that out a few weeks ago. Got any other recommendations on the points below?

No, there is not much more to be said. Remember to issue "squid -k rotate" periodically, to let Squid compact the swap.state index/log. And always leave about 30% free space in the cache partition, or the filesystem will degrade noticeably.

Is it really REQUIRED to leave 30% of free space on the partition? Under what conditions would less than 30% free space degrade the filesystem? I'm using reiser v3 and only have about 10% free space on both of my cache_dir partitions and have not yet seen any performance degradation. I'm getting a 45-50% byte hit ratio and similar request hit ratios. I was short on disk space so I'm pushing it hard, but it would be great to know the pitfalls of my scenario, and I would decrease the cache size if required.

client_http.requests = 136.414047/sec
client_http.hits = 68.765346/sec
client_http.errors = 0.00/sec
client_http.kbytes_in = 99.904748/sec
client_http.kbytes_out = 1027.876928/sec
client_http.all_median_svc_time = 0.273318 seconds
client_http.miss_median_svc_time = 1.311657 seconds
client_http.nm_median_svc_time = 0.000911 seconds
client_http.nh_median_svc_time = 1.242674 seconds
client_http.hit_median_svc_time = 0.010350 seconds

Connection information for squid:
        Number of clients accessing cache:      447
        Number of HTTP requests received:       107420517
        Number of ICP messages received:        2747673
        Number of ICP messages sent:    2748084
        Number of queued ICP replies:   0
        Request failure ratio:  0.00
        Average HTTP requests per minute since start:   3577.4
        Average ICP messages per minute since start:    183.0
        Select loop called: 891765974 times, 2.020 ms avg

Cache information for squid:
        Request Hit Ratios:     5min: 50.4%, 60min: 47.9%
        Byte Hit Ratios:        5min: 49.9%, 60min: 46.8%
        Request Memory Hit Ratios:      5min: 14.3%, 60min: 16.0%
        Request Disk Hit Ratios:        5min: 38.4%, 60min: 37.8%
        Storage Swap size:      62259262 KB
        Storage Mem size:       1048816 KB
        Mean Object Size:       15.68 KB
        Requests given to unlinkd:      0

Median Service Times (seconds)  5 min   60 min:
        HTTP Requests (All):    0.27332  0.61549
        Cache Misses:           1.31166  1.31166
        Cache Hits:             0.01035  0.00865
        Near Hits:              1.24267  1.24267
        Not-Modified Replies:   0.00091  0.00091
        DNS Lookups:            0.00278  0.00278
        ICP Queries:            0.0      0.0

These are my cache partitions:

[EMAIL PROTECTED] ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              35G   31G  3.5G  90% /sda1
/dev/sdb1              35G   32G  3.0G  92% /sdb1

cache_dir aufs /sda1/cache0 8000 20 256
cache_dir aufs /sda1/cache1 8000 20 256
cache_dir aufs /sda1/cache2 8000 20 256
cache_dir aufs /sda1/cache3 8000 20 256
cache_dir aufs /sdb1/cache0 8000 20 256
cache_dir aufs /sdb1/cache1 8000 20 256
cache_dir aufs /sdb1/cache2 8000 20 256
cache_dir aufs /sdb1/cache3 8000 20 256

I'm using squid 2.6S17 on CentOS.

Thanks.
Manoj
--
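A sketch of the "rotate periodically" advice Henrik gives; the binary path and schedule are assumptions, while squid -k rotate is the documented command (logfile_rotate in squid.conf controls how many old logs are kept):

# root's crontab: rotate logs and let Squid compact swap.state nightly at 04:00
0 4 * * * /usr/local/squid/sbin/squid -k rotate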
Re: [squid-users] 2.7 vs 3.0
On Sun, 23 Dec 2007, Ralf Hildebrandt wrote:

* Matus UHLAR - fantomas <[EMAIL PROTECTED]>:

I recently switched to 3.0 and found it (for my purposes) to be MORE stable than 2.6.x, which would crash ever so often.

What's "the" purpose?

Normal caching :)

I don't remember ever seeing 2.6 crash...

Oh, it used to crash here from time to time.

We've been using it since its first stable release and we've not had a single crash so far. Plus, we also used to have numerous power failures, but every time squid came back OK.

Manoj
--
Re: [squid-users] No great results after 2 weeks with squid
On Tue, 18 Dec 2007, Amos Jeffries wrote:

Hi list, I've been testing and studying squid for almost two weeks now and I'm getting no results. I already understand the problems related to HTTP headers, where in most cases web server administrators or programmers are creating more and more dynamic data, which is bad for caching. So I installed CentOS 5 along with 2.6.STABLE6 using yum install and set only an ACL for my internal network. After that I also set visible_hostname to localhost since squid was complaining about it.

Your DNS is slightly broken. Any web-service server should have a FQDN for its hostname. Many programs like squid use the hostname in their outward connections, and many validate all connecting hosts before accepting data traffic.

Now, as I stated already, I have read a lot about squid, including some tips on optimizing disk access or increasing the memory size limit, but shouldn't squid be working great out of the box?

It does ... for a generic 1998-era server. To work well these days the configuration is very site-specific.

Oh, I forgot: my problem is that in mysar, which I installed in order to see the performance, I only see 0% of TRAFFIC CACHE PERCENT even though almost 300 websites have already been visited. On some occasions I see 10% or even 30/40%, but for almost 98% of websites I get 0%.

Those would be the ones including '?' in the URI, methinks.

So my questions are:
- Should Squid only be considered for large environments with hundreds or even thousands of people accessing the web?
- These days, is a proxy like Squid for caching purposes more of a "nice to have" or a "must have", when for almost every site proxies are skipped and WAN access speeds are increasing every day?

Thanks! By the way: I intend to use Squid for caching purposes only, since I already have Cisco-based QoS and bandwidth management. My deployment site has at most 5 people accessing the web simultaneously over an 8 Mb DSL connection.

Well then, as said earlier, you need more than 100MB of data cache, and probably more than 64MB of RAM cache.

My current config is:

http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

Right here you are not caching a LOT of websites, some of which are actually cachable. We now recommend using 2.6STABLE17 with some new refresh_pattern settings instead:

refresh_pattern cgi-bin 0 0% 0
refresh_pattern \? 0 0% 0
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440

Also add these refresh_pattern lines here and see if it helps...

refresh_pattern -i \.exe$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.zip$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tar\.gz$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tgz$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mp3$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ram$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.jpeg$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.gif$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.wav$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.avi$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mpeg$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mpg$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.pdf$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ps$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.Z$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.doc$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ppt$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tiff$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.snd$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.jpe$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.midi$ 10080 90%
Re: [squid-users] transparent squid and ustream.tv
On Tue, 18 Dec 2007, [EMAIL PROTECTED] wrote: Hi, I am having trouble accessing http://www.ustream.tv videos when connected through my Squid, is there a known fix for this problem ?. I tried the always_direct command but with no success. I am using squid-2.5.STABLE14 Please upgrade to the latest stable version... http://www.squid-cache.org/Versions/ Could anybody please help me ? Thanks Samy --
Re: [squid-users] Caching Expired Objects - One Small Step Forward - WARNING
Ah.. a Gotcha !! Noted. Thanks Manoj On Mon, 8 Oct 2007, Solomon Asare wrote: Hi All, pls should anyone want to try the scripts I posted, do not use /tmp. I irretrievably trashed my 300+ rules which have been built over a week. I now have to crawl up all over again. I wonder why I didn't see this coming. You may use /var/log/squid/. Thanks, solomon. --
Re: [squid-users] Caching Expired Objects - One Small Step Forward
On Sun, 7 Oct 2007, Solomon Asare wrote: Hi all, the long skeletal howto:

8. COMMENTS
This is only one of many ways that this goal can be achieved, and certainly not the best, me being a non-guru, although a determined Linux user. I guess the features I couldn't find in squid2.6-stable that made me add apache may be included in later stable releases, making this irrelevant. Unfortunately, I did not keep any logs whilst doing this, so I may have skipped a few steps. If I have, it will show sooner or later. I have tried to put together as much of the info as I think someone might need. The squid mailing list proved very helpful, and I am very grateful. There are many on the list prepared to help, although you may come across a few who will repeatedly tell you how easy it is to do what you want to do without sharing how. Don't despair if you bump into them.

Great job, Solomon. Many of us have been trying to achieve something similar with youtube and google vids; this will help a great deal. How big a cache_dir do you keep for youtube vids? It would have to be quite big to be able to cache the vids in large enough quantity to get a decent hit rate. I'm going to try to achieve what you've described here as my next project. Thanks for a job well done.

Manoj

Regards,
solomon.
--
Re: [squid-users] Squid Running out of Disk space
On Wed, 26 Sep 2007, Abdock wrote: Hello all, I have a single HDD, 72GB, and have configured Squid with the parameters below, but it just keeps running out of disk space since SQUID 2.6 STABLE 15.

Upgrade to the latest stable:
http://squidproxy.wordpress.com/2007/09/03/dont-upgrade-to-squid-26stable15-skip-straight-to-squid-26stable16/

Manoj

cache_dir aufs /usr/local/squid/var/cache 5 16 256

After a week the box runs out of disk space. Can anybody help on this one?

Thanks.
--
Re: [squid-users] block on browser type?
On Tue, 25 Sep 2007, Adrian Chadd wrote: Hm, isn't there an option to block based on user-agent? User-Agent is just another header, after all.

Yes there is:

acl BROWSER browser -i Mozilla \(compatible; MSIE\)

This should match Mozilla- and MSIE-based browsers.

Manoj

If in doubt, it definitely should be doable via an external ACL helper. I know you can pass it arbitrary headers..

Adrian

On Mon, Sep 24, 2007, nairb rotsak wrote: Hello all, I searched and couldn't find a way to do this. We are trying to block IE 7. We have Citrix farms set up with IE 6, Squid and DansGuardian. There are a few rogue people (think political here.. we can just lock down anything not coming from the Squid box) that believe they are fairly technical. They hold positions which allow them to demand 'software installability'. So they decide from time to time to upgrade to IE 7 and it is just a disaster. But we would find out a lot sooner if they lost internet access when they did it. I have just started to use req_mime_type for applications.. so I thought there might be some specific way of getting this to recognize IE 7.

thanks,
ipguru99
--
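A sketch of how that would look for the IE 7 case specifically. The regex is my assumption about IE 7's User-Agent string (which normally contains "MSIE 7."); the browser acl type and http_access are standard squid.conf:

# "." used instead of a literal space to sidestep squid.conf tokenising
acl ie7 browser -i MSIE.7\.
http_access deny ie7
# everything else falls through to your existing http_access rules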
Re: [squid-users] webmails are not accessible - SQUID 2.5.STABLE12
On Thu, 6 Sep 2007, Tek Bahadur Limbu wrote: Hi Simsam,

[EMAIL PROTECTED] wrote: Hi Peter, No, this is only the https rule; I wrote it down to illustrate that the https ports are open. All http traffic is allowed. Could you please give me the commands needed to install SQUID 2.6 according to Tek's advice? I got the file from the site; I have some worries that the upgrade might affect the current setup!

Did you install Squid-2.5 with SUSE's package management tool or did you install it from source? Whichever method you used, you can just keep the old Squid binary and its configuration files, just in case something goes wrong with the Squid-2.6 installation!

The following installation steps might help:

(1.) tar zxvf squid-2.6.STABLE14.tar.gz
(2.) cd squid-2.6.STABLE14/
(3.) ./configure --bindir=/usr/local/sbin \

I'd rather do it as: ./configure --prefix=/usr/local/squid26, so that it puts all the squid 2.6 related files in a single directory. For easier access to the config files, binary and logs, I'd create symlinks to my favourite paths. Just a point to share.

    --sysconfdir=/usr/local/etc/squid \
    --datadir=/usr/local/etc/squid \
    --libexecdir=/usr/local/libexec/squid \
    --localstatedir=/usr/local/squid \
    --enable-removal-policies=heap,lru \
    --enable-storeio=diskd,aufs,coss,ufs,null \
    --enable-snmp \
    --enable-epoll \
    --with-large-files \
    --prefix=/usr/local \
    --disable-ident-lookups \
    --enable-underscores \
    --with-large-files \
    --disable-http-violations \
    --enable-delay-pools \
    --with-maxfd=8192
(4.) make all
(5.) make install
(6.) vi /usr/local/etc/squid/squid.conf
(7.) /usr/local/sbin/squid -z
(8.) /usr/local/sbin/squid -f /usr/local/etc/squid/squid.conf

Note: your compilation parameters may differ. Please adjust according to your demands and needs. If your SUSE Linux box has all the required development tools installed and updated, then the installation should be a breeze! Remember to read the default squid.conf which comes with the new installation. Also check this out:

http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE14-RELEASENOTES.html

Happy Squid proxying with Squid-2.6STABLE14 !!!

Thanking you...

Thank you,
Simsam

Peter Albrecht <[EMAIL PROTECTED]> 09/05/2007 05:58 PM
To squid-users@squid-cache.org
Subject Re: [squid-users] webmails are not accessible - SQUID 2.5.STABLE12

Hi Simsam,

I am still a beginner in this field, but I can tell you that the proxy itself is acting as a firewall; there is no specific protocol filtration, and here are the ACLs for the SSL ports:

acl SSL_ports port 443 563
http_access deny CONNECT !SSL_ports
acl Safe_ports port 443 563 # https, snews
http_access deny !Safe_ports

Is this your only http_access rule? That would mean you only allow https connections and no http connections.

The machine hosting squid is directly connected to the router; as I mentioned before, it is also the firewall, and no ACLs are there! No, it is not running in transparent mode! Before deploying SQUID, this webmail opened normally. When trying to access a specific webmail like http://mailhost.ccc.com.om/mail it gives the following:

If you only allow https as mentioned above, that will always be denied. Do http connections to other servers work?

Internet Explorer cannot display the webpage
Most likely causes:
You are not connected to the Internet.
The website is encountering problems.
There might be a typing error in the address.

This does not look like a Squid message denying access ...
Please send all your ACL and http_access rules from squid.conf so that we can have a look. Regards, Peter --
Re: [squid-users] squid cant cache flv (youtube)
On Mon, 27 Aug 2007, Kris wrote: So that means it's impossible to cache youtube using squid now?

Yes, it does appear so. FYI, it's not the entire youtube content that's uncacheable, only the flash videos and some image files. You can still get hits on the rest of the thumbnail images and content.

Manoj
--
Re: [squid-users] squid cant cache flv (youtube)
On Sun, 26 Aug 2007, Kris wrote: Tried to put reload-into-ims but got the same result; youtube still can't be cached. Any clue?

This was already discussed in another thread with the subject "refresh patterns". In the full URL of a youtube video there are a few parameters that are never the same, so basically every time you request the same file its URL is different from the previous request, and it will never get a hit unless someone writes a workaround/patch to rewrite the URLs while storing and looking up youtube videos.

Manoj
--
Re: [squid-users] refresh patterns!
On Mon, 13 Aug 2007, Adrian Chadd wrote: Yum! (Of course there's more to caching youtube - specifically, one would need to implement a patch to squid to create a URI from that youtube URL which produces the same "host" part regardless of which bit of the CDN you fetch it from, using that URL for the cache storage and lookup. That'd be a pretty nifty start.)

Ok, so you mean it's not that it isn't caching flash media, but that the URL of the media file changes every time you access it (the request goes to different servers for the same content)..

1187069957.907 803 202.51.76.26 TCP_MISS/303 276 GET http://youtube.com/get_video?video_id=69M_1ow_yEg&t=OEgsToPDskLjJ2R2yzfUrzuuPjSq4-2Z
1187069963.328 3857 202.51.76.26 TCP_MISS/302 181 GET http://cache.googlevideo.com/get_video?video_id=69M_1ow_yEg
1187069973.085 1574 202.51.76.26 TCP_MISS/200 431 GET http://video.google.com/s?ns=yt&sourceid=y&sdetail=p%3A%2F&vid=kPCRaxHXMKD2NSrRYUYeegC&docid=69M_1ow_yEg&el=detailpage&nbe=0&st=0.667&et=0.667&len=104&rt=14.7&fv=WIN%209%2C0%2C47%2C0
1187070090.365 1026 202.51.76.26 TCP_MISS/303 276 GET http://youtube.com/get_video?video_id=69M_1ow_yEg&t=OEgsToPDskKQa32R7J750cPu_2LqiKdC
1187070091.880 1469 202.51.76.26 TCP_MISS/302 181 GET http://cache.googlevideo.com/get_video?video_id=69M_1ow_yEg
1187070100.866 787 202.51.76.26 TCP_MISS/200 431 GET http://video.google.com/s?ns=yt&sourceid=y&sdetail=p%3A%2F&vid=yCOz1cNdFnuPWJytHrikSgC&docid=69M_1ow_yEg&el=detailpage&nbe=0&st=0.733&et=0.733&len=104&rt=11.092&fv=WIN%209%2C0%2C47%2C0

Above are two requests for the same video. The "&t=..." part in the youtube.com/get_video?... URL and the "&st=...", "&et=..." and "&rt=..." parts in the video.google.com/s?... URL change with every request. I do wonder why the second URL, cache.googlevideo.com/get_video?..., got a MISS though. Anyway, we will not be caching any of these youtube and googlevideo URLs for now, as we do not have much cache space and those flash videos would use up the cache space with minimal or no chance of getting a hit, IMO. Please correct me if I'm wrong in this assumption. But I'm very interested in caching these URLs if someone could pull off a patch as Adrian suggested (maybe strip those tags off the URL while storing and looking up in the cache store). I'll be increasing the storage space in the near future, so it would be great to see such a patch.. ;)

cheers.
Manoj

The relevant bits in my Squid config:

# Cache dynamic content from youtube/etc
# Let the clients' favourite video site through
acl youtube dstdomain .youtube.com
cache allow youtube
# NOW stop any other dynamic stuff being cached
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

refresh_pattern -i \.flv$ 10080 90% 99 reload-into-ims
maximum_object_size 32 MB
maximum_object_size_in_memory 512 KB

So, who wants a t-shirt for implementing the above patch and demonstrating it works with Youtube? I just don't have the time.

Adrian
--
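For the record, the patch being wished for here is essentially what later Squid 2.7 releases shipped as the storeurl_rewrite_program feature. The helper below is only a rough sketch of the idea (the canonical key format, the regex and the helper I/O details are my assumptions, not a tested YouTube rule set): it maps every variant of a get_video URL onto one stable "store URL" so repeated requests hit the same cache entry.

#!/usr/bin/env python
# Rough sketch of a store-URL rewriting helper: reads one request per line
# on stdin ("URL client/fqdn user method ..."), writes the URL to use as the
# store key, or an empty line to leave the request unchanged.
import re
import sys

video = re.compile(r'^http://[^/]*(youtube|googlevideo)\.com/get_video\?.*video_id=([^&]+)')

for line in sys.stdin:
    fields = line.split()
    url = fields[0] if fields else ''
    m = video.match(url)
    if m:
        # collapse every mirror/ticket variant onto one canonical key
        sys.stdout.write('http://video-store.internal/get_video?video_id=%s\n' % m.group(2))
    else:
        sys.stdout.write('\n')
    sys.stdout.flush()

In a 2.7 squid.conf this would be wired up with storeurl_rewrite_program pointing at the helper and a storeurl_access rule limiting it to the video domains; the 2.6 builds discussed in this thread do not have the feature.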
Re: [squid-users] refresh patterns!
On Mon, 13 Aug 2007, Adrian Chadd wrote:

I've put up the acl for them and yet everything else gets a hit except the flash media itself.
[snip]
access.log on second viewing of same media url :

That didn't mean it didn't cache it; it means the object wasn't in cache. Turn off strip_query_terms so you can see which video URLs are being cached; then check store.log to see if that URL is being written to store as a cachable item.

I put the same URL in the browser immediately after finishing the first playback run and still get the MISS. Oh, and I've never looked into the store.log and don't really know what I'd be looking for in it. I would need help figuring it out. These two pieces are from the same time (same requests), from access.log and store.log:

1187009946.815      0 202.51.76.26 TCP_MEM_HIT/200 4374 GET http://video.google.com/ThumbnailServer2?app=vss&contentid=7f6c581e8547dd62&offsetms=0&itag=w160&lang=en&sigh=SLKWYkDxnwcJmw7Ex8xbFygCFr4 - NONE/- image/jpeg
1187009946.945      0 202.51.76.26 TCP_MEM_HIT/200 8345 GET http://video.google.com/ThumbnailServer2?app=vss&contentid=9cfb5a2b1e7f98ab&offsetms=0&itag=w160&lang=en&sigh=42_-hGXJKauCGsyYOZ6NaNr2w8A - NONE/- image/jpeg
1187009947.065      1 202.51.76.26 TCP_MEM_HIT/200 11206 GET http://video.google.com/ThumbnailServer2?app=vss&contentid=13b819cade2d97c9&offsetms=0&itag=w160&lang=en&sigh=johr_n0nAmLrdh4zC0OGZIuzlk8 - NONE/- image/jpeg
1187009947.134   7854 202.51.76.26 TCP_MISS/200 108327 GET http://video.google.com/googleplayer.swf?&videoUrl=http%3A%2F%2Fvp.video.google.com%2Fvideodownload%3Fversion%3D0%26secureurl%3DswLBQB908eYIPjBbQfhBn-u8W4co9zNri4n1WexhE1EJDz7JZjr53FS-qNy4_3hYoDpPVB-p3qyq9O1B3IEbSqIbyTbmrJu0ViCo4tLOfXAVaEmPQa1rH-v2bYVQm6oBPlwcYbmCXe_uF9dJGJUGBJFL_MgJ55DpXxDnskLvF4XtaHFQMzbzm4HO3vQquKxyvJs8V0SNLb9F2Lh5n5vYC5Xzg9W4PZfbaPuSxmi5iHM3gesnKizJ38Lrgl8QzaTiew%26sigh%3DiFYnY-pTKKZao_2TGBA7aLURWUY%26begin%3D0%26len%3D7597%26docid%3D933245742495114727&messagesUrl=http%3A%2F%2Fvideo.google.com%2FFlashUiStrings.xlb%3Fframe%3Dflashstrings%26hl%3Den&thumbnailUrl=http%3A%2F%2Fvideo.google.com%2FThumbnailServer2%3Fapp%3Dvss%26contentid%3D24e1827bb9aaafb4%26offsetms%3D0%26itag%3Dw320%26lang%3Den%26sigh%3DjUckg0J7gpA7ZMvO7x9ybTu5aN4 - DIRECT/209.85.169.103 application/x-shockwave-flash
1187009947.266     34 202.51.76.26 TCP_MEM_HIT/200 9683 GET http://video.google.com/ThumbnailServer2?app=vss&contentid=2446b6f35abc86db&offsetms=5000&itag=w160&lang=en&sigh=Nt4cOOratZaHNbhnGLNDLh9Ar9M - NONE/- image/jpeg
1187009947.335      0 202.51.76.26 TCP_MEM_HIT/200 10625 GET http://video.google.com/ThumbnailServer2?app=vss&contentid=7b76d0acb9731ebb&offsetms=0&itag=w160&lang=en&sigh=WKO3b8TQQsKv-WJYXSakZ-mRtnc - NONE/- image/jpeg
1187009947.493      6 202.51.76.26 TCP_MEM_HIT/200 5682 GET http://video.google.com/ThumbnailServer2?app=vss&contentid=1791dc60d50c96ad&offsetms=0&itag=w160&lang=en&sigh=oOKr_0hGs1A4jumyQ880QbEsnmc - NONE/- image/jpeg
1187009947.774    125 202.51.76.26 TCP_MEM_HIT/200 8525 GET http://video.google.com/ThumbnailServer2?app=vss&contentid=4a2b8569b1771bc4&offsetms=15000&itag=w160&lang=en&sigh=Ote6sYEt8xRPEkdEe49TfkR-5xc - NONE/- image/jpeg
1187009947.831     61 202.51.76.26 TCP_HIT/200 1432 GET http://video.google.com/FlashUiStrings.xlb?frame=flashstrings&hl=en - NONE/- text/plain
1187009948.538    682 202.51.76.26 TCP_MISS/200 302 GET http://video.google.com/videotranscript?frame=c&docid=933245742495114727&type=list - DIRECT/209.85.169.103 text/xml
1187009958.424   1496 202.51.76.26 TCP_MISS/200 279 GET http://video.google.com/s?ns=vss&docid=933245742495114727&sw=1&st=0.591&et=0.591&len=7.598&rt=9.463&fv=WIN%209%2C0%2C47%2C0 - DIRECT/209.85.169.104 text/html
1187009960.264  12032 202.51.76.26 TCP_MISS/200 348280 GET http://vp.video.google.com/videodownload?version=0&secureurl=swLBQB908eYIPjBbQfhBn-u8W4co9zNri4n1WexhE1EJDz7JZjr53FS-qNy4_3hYoDpPVB-p3qyq9O1B3IEbSqIbyTbmrJu0ViCo4tLOfXAVaEmPQa1rH-v2bYVQm6oBPlwcYbmCXe_uF9dJGJUGBJFL_MgJ55DpXxDnskLvF4XtaHFQMzbzm4HO3vQquKxyvJs8V0SNLb9F2Lh5n5vYC5Xzg9W4PZfbaPuSxmi5iHM3gesnKizJ38Lrgl8QzaTiew&sigh=iFYnY-pTKKZao_2TGBA7aLURWUY&begin=0&len=7597&docid=933245742495114727 - DIRECT/209.85.139.176 video/x-flv

1187009947.127 SWAPOUT 03 00019A3E 300906C560477BDC06F64F757F082A0F 200 1187009940 1186109575 1188219540 application/x-shockwave-flash 108010/108010 GET http://video.google.com/googleplayer.swf?&videoUrl=http%3A%2F%2Fvp.video.google.com%2Fvideodownload%3Fversion%3D0%26secureurl%3DswLBQB908eYIPjBbQfhBn-u8W4co9zNri4n1WexhE1EJDz7JZjr53FS-qNy4_3hYoDpPVB-p3qyq9O1B3IEbSqIbyTbmrJu0ViCo4tLOfXAVaEmPQa1rH-v2bYVQm6oBPlwcYbmCXe_uF9dJGJUGBJFL_MgJ55DpXxDnskLvF4XtaHFQMzbzm4HO3vQquKxyvJs8V0SNLb9F2Lh5n5vYC5Xzg9W4PZfbaPuSxmi5iHM3gesnKizJ38Lrgl8QzaTiew%26sigh%3DiFYnY-pTKKZao_2TGBA7aLURWUY%26begin%3D0%26len%3D7597%26docid%3D933245742495114727&messagesUrl=http%3A%2F%2Fvideo.google.com%2FFlashUiStrings.xlb%3Fframe%3Dflashstrings%26hl%3Den&thumbnailUrl=http%3A%2F%2Fvideo.googl
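For reference, the logging change Adrian mentions is a single squid.conf directive; it only affects what gets written to access.log, not what gets cached:

# squid.conf: stop stripping query strings from logged URLs
strip_query_terms off

After a reconfigure and another playback, searching store.log for the full video URL (or just "get_video") should show whether a SWAPOUT line was ever written for it; the exact store.log location depends on your install, so that part is left to the reader.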
Re: [squid-users] refresh patterns!
On Mon, 13 Aug 2007, Amos Jeffries wrote:

Haven't had luck with those media files. They just don't seem to be cached. I tried a few suggestions from this list but they didn't help. It'd be really nice if someone could provide working rules to cache that flash media from youtube, googlevideos etc...

I wrote this into the wiki recently to cover the youtube case. Results with the other sites vary. http://wiki.squid-cache.org/ConfigExamples/DynamicContent

I've put up the acl for them and yet everything else gets a hit except the flash media itself.

config :

# TAG: Caching Dynamic Contents
acl youtube dstdomain .youtube.com
acl googlevideo dstdomain .google.com .googlevideo.com
cache allow youtube
cache allow googlevideo

# TAG: hierarchy_stoplist
#   A list of words which, if found in a URL, cause the object to
#   be handled directly by this cache. In other words, use this
#   to not query neighbor caches for certain objects. You may
#   list this option multiple times. Note: never_direct overrides
#   this option.
# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# TAG: cache
#   A list of ACL elements which, if matched, cause the request to
#   not be satisfied from the cache and the reply to not be cached.
#   In other words, use this to force certain objects to never be cached.
#
#   You must use the word 'DENY' to indicate the ACL names which should
#   NOT be cached.
#
#   Default is to allow all to be cached
# We recommend you to use the following two lines.
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
#cache deny QUERY

access.log on second viewing of same media url :

1186993611.227      1 202.51.76.26 TCP_HIT/200 1726 GET http://youtube.com/img/pic_globalnav_gradation_875x36.png - NONE/- image/png
1186993611.385    276 202.51.76.26 TCP_HIT/200 27220 GET http://youtube.com/img/btn_gradient_orange_1x23.png - NONE/- image/png
1186993612.377   1333 202.51.76.26 TCP_MISS/302 181 GET http://cache.googlevideo.com/get_video? - DIRECT/74.125.15.28 -
1186993618.794    667 202.51.76.26 TCP_MISS/200 431 GET http://video.google.com/s? - DIRECT/209.85.169.99 text/html

If anyone can provide a list (even partial) of other common sites that it works for, please let us know so I can update the wiki.

Amos

Manoj
--
Re: [squid-users] refresh patterns!
On Sun, 12 Aug 2007, Adrian Chadd wrote:

At one point we had 45-49% byte hit for about 2 months, then the squid server started rebooting frequently and hasn't been very stable since. It's building back up slowly and is increasing...

Hm, file bugzilla reports if you get crashes and stuff.

It's more related to the power fluctuations than to bugs of any sort.

What about .flv? Flash media and flash video? Thought about rules for those?

Haven't had luck with those media files. They just don't seem to be cached. I tried a few suggestions from this list but they didn't help. It'd be really nice if someone could provide working rules to cache that flash media from youtube, googlevideos etc...

Adrian

Manoj
--
Re: [squid-users] refresh patterns!
On Sun, 12 Aug 2007, Adrian Chadd wrote:

Do your users report issues with the heavy caching and the reload-into-ims?

None so far.

35% byte hit rate is pretty nice though.

At one point we had 45-49% byte hit for about 2 months, then the squid server started rebooting frequently and hasn't been very stable since. It's building back up slowly and is increasing...

Adrian

Manoj
--
Re: [squid-users] refresh patterns!
On Wed, 8 Aug 2007, Adrian Chadd wrote:

G'day,

My next question! What are people using as refresh_patterns for normal ISP forward caching? I'd like to put up a wiki page with a list of useful refresh patterns, especially if you've managed to enable caching of content such as streaming http media/flv, google earth, etc. Basically, anything you've got that increases Squid caching of traffic above the default of ~10% would be great. I'll summarise and write up a wiki article. Thanks!

Here's ours:

refresh_pattern windowsupdate.com/.*\.(cab|exe) 4320 100% 43200 reload-into-ims
refresh_pattern update.microsoft.com/.*\.(cab|exe) 4320 100% 43200 reload-into-ims
refresh_pattern download.microsoft.com/.*\.(cab|exe) 4320 100% 43200 reload-into-ims
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i \.exe$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.zip$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.tar\.gz$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.tgz$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.mp3$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.ram$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.jpeg$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.gif$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.wav$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.avi$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.mpeg$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.mpg$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.pdf$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.ps$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.Z$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.doc$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.ppt$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.tiff$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.snd$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.jpe$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.midi$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.ico$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.mp3$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.bin$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.jpg$ 10080 90% 99 reload-into-ims
refresh_pattern -i \.wmv$ 10080 90% 99 reload-into-ims
refresh_pattern . 0 35% 6480

With a 64GB cache_dir and 18MB max_obj_size, we get around 35% byte hit and 48% request hit for client_http.requests = 141.127786/sec.

Thanks
Manoj

Adrian
--
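For anyone collecting these for the wiki: the three numeric fields are min, percent and max (min and max in minutes). Below is a rough Python sketch of how, as I understand it, the heuristic uses them when the origin server sends no explicit expiry information. It is simplified from memory, not a transcription of Squid's refresh logic, so treat it purely as an illustration.

def is_fresh(age_min, lm_age_min, min_m, percent, max_m):
    """age_min: minutes since the object was stored/revalidated.
    lm_age_min: minutes between Last-Modified and the time it was stored.
    Returns True if the heuristic would consider the object fresh."""
    if age_min > max_m:
        return False                              # beyond max: always stale
    if lm_age_min > 0 and (age_min / lm_age_min) * 100 < percent:
        return True                               # young relative to its LM age
    return age_min <= min_m                       # otherwise fresh only within min

# Example under "refresh_pattern ^ftp: 1440 20% 10080": an FTP object fetched
# 3 hours ago whose Last-Modified was 30 days before it was stored:
#   is_fresh(180, 43200, 1440, 20, 10080)  ->  True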
Re: [squid-users] Regular Expression
[EMAIL PROTECTED]://www.main.example.org/@http://www.example.org/@r should work, but I'm not sure what the 'r' modifier does... something squidguard-ish?

Yes, this works fine. But if I had a query string like [EMAIL PROTECTED]://www.main.example.org/blbl/nbblbl/[EMAIL PROTECTED]://www.example.org/@r then the resulting URL is only http://www.example.org/ without the rest. I want only to rewrite the subdomain into the domain; the rest of the URL should be preserved. In my opinion the 'r' modifier means that the client gets a "302 Moved Temporarily".

kind regards
Enrico

Could someone please explain what the "r" modifier does? I've searched books and googled for it but couldn't find any reference to the "r" substitution modifier.

Thanks
Manoj
--
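In case it helps anyone hitting the same problem, here is a hypothetical stdin/stdout rewrite helper that does what Enrico seems to want: map the www.main.example.org prefix to www.example.org while keeping the rest of the URL, and reply with a "302:" prefix so the client receives an actual redirect (which is what the 'r' modifier appears to mean). The field layout on stdin and support for the "NNN:URL" reply form are assumptions here, not something confirmed in this thread, so check them against your Squid version's url_rewrite_program documentation.

#!/usr/bin/env python3
import sys

OLD = "http://www.main.example.org/"
NEW = "http://www.example.org/"

for line in sys.stdin:
    fields = line.split()
    url = fields[0] if fields else ""
    if url.startswith(OLD):
        # keep everything after the old host prefix, and ask Squid to send
        # the client a real 302 redirect rather than silently rewriting
        sys.stdout.write("302:" + NEW + url[len(OLD):] + "\n")
    else:
        sys.stdout.write(url + "\n")    # unchanged: echo the URL back
    sys.stdout.flush()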
Re: [squid-users] Cache is running out of filedescriptors
On Sat, 21 Jul 2007, Tek Bahadur Limbu wrote:

Hi Henrik,

Thanks for correcting me. By the way, I don't see the "--max-fd=NN" option with "./configure --help".

There is a --with-maxfd=N option in 2.6.

I will use this option on my Squid boxes running on Linux in the future.

Thanking you...

Manoj.
--
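For example, on a source build of 2.6 you would pass it to configure along these lines (8192 is just an illustrative value; size it to your peak concurrent connections, and remember the operating system's own descriptor limit has to allow it as well):

./configure --with-maxfd=8192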
Re: [squid-users] Possible? Cache images with different parameters after question-mark?
On Tue, 10 Jul 2007, Hermann-Marcus Behrens wrote:

Hello,

is it possible to force squid to cache an image which never changes, but which is loaded with different parameters after a question mark? Example:

http://www.example.com/pixel.gif?page=index.html&rand=4125422

I use this to measure the traffic on my webpage. A Javascript loads the image with different parameters (idea borrowed from Google-Analytics) and later a small perl-script examines the log and does some accounting. The problem: the file "pixel.gif" never changes, but due to the question mark and the changing parameters squid always connects to apache and fetches the same file (pixel.gif) from the apache daemon. I would like to configure squid in a way that it caches the file and serves it from memory. Is this possible?

Find the acl "QUERY" that matches cgi-bin and ?. Just before this acl, add an acl to allow caching of the file. Something like this:

acl MYDOMAIN dstdomain .example.com   (match to your specific need)
cache allow MYDOMAIN
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

Hope this works.

Greetings from Germany,
Hermann-Marcus Behrens
--
Re: [squid-users] Creating a web admin site, suggestions?
On Mon, 9 Jul 2007, Jeff Pang wrote:

2007/7/9, Elijah Alcantara <[EMAIL PROTECTED]>:

I was thinking of saving these rules to the database; then, if the user clicks the apply button at the frontend, the squid proxy will fetch all these rules from a text/config file that the system created from the database.

You could read the config file itself into a BIG text box where you can modify whatever you like, and pressing the "save" button would then write the data in the text box back to the config file. Just a suggestion. Do NOT forget to make a backup of the config file before the webpage writes the data back.

The only thing I can think of is that you may have to run the webserver as root, since you need to modify squid.conf and execute the 'squid -k reconfigure' command.

You should not run apache with root as its effective user. Just set the permissions on squid.conf so it is writeable by the webserver's effective user, set the squid binary setuid, and use a wrapper to run the squid reconfigure. That should do it.

BTW, parsing and redefining squid.conf with PHP is not easy, is it? Maybe Perl is a better choice.

Good luck.
--
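A minimal sketch of the backup-then-write-then-reconfigure flow described above, assuming the web application runs as a user permitted to invoke squid via sudo; the config path, the sudo rule and the use of 'squid -k parse' as a pre-flight syntax check are assumptions to adapt, not part of the original advice:

#!/usr/bin/env python3
import shutil
import subprocess
import time

SQUID_CONF = "/etc/squid/squid.conf"          # assumed location

def save_and_apply(new_text):
    # 1. back up the current config before touching it
    backup = "{}.{}.bak".format(SQUID_CONF, time.strftime("%Y%m%d%H%M%S"))
    shutil.copy2(SQUID_CONF, backup)
    # 2. write the text submitted from the web form
    with open(SQUID_CONF, "w") as conf:
        conf.write(new_text)
    # 3. syntax-check the new config, then apply it without a full restart
    subprocess.check_call(["sudo", "squid", "-k", "parse"])
    subprocess.check_call(["sudo", "squid", "-k", "reconfigure"])
    return backup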
Re: [squid-users] Re: *** VIRUS *** [squid-users] Server Report
On Fri, 6 Jul 2007, Henrik Nordstrom wrote:

On Thu, 2007-07-05 at 13:19 +0545, Manoj_Rajkarnikar wrote:

On Tue, 1 Jan 2002, [EMAIL PROTECTED] wrote: Please do something about it. Found worm in a message...

Now the filters have been hardened a bit further, with the side effect that most non-text attachments will get rejected, at least until there is a proper virus scanner running..

Thanks. I sure hope no other virus makes it through to the list. And no, I didn't send that virus.

I agree.

Received: from squid-cache.org (ppp-124.120.133.107.revip2.asianet.co.th [124.120.133.107]) by squid-cache.org (8.14.0/8.13.6) with ESMTP id l642GdEo067087 for ; Tue, 3 Jul 2007 20:16:42 -0600 (MDT) (envelope-from [EMAIL PROTECTED])

Regards
Henrik

Manoj
--
Re: [squid-users] Re: *** VIRUS *** [squid-users] Server Report
Hi Neil.

On Thu, 5 Jul 2007, Neil A. Hillard wrote:

Hi,

Manoj_Rajkarnikar wrote: On Tue, 1 Jan 2002, [EMAIL PROTECTED] wrote:

WARNING: This e-mail has been altered by MIMEDefang. Following this paragraph are indications of the actual changes made. For more information about your site's MIMEDefang policy, contact Vianet System Administrator <[EMAIL PROTECTED]>. For more information about MIMEDefang, see: http://www.roaringpenguin.com/mimedefang/enduser.php3 Dropped document.scr (application/octet-stream) containing virus Worm.SCO.A-1.

Please do something about it. Found worm in a message...

I seriously doubt Henrik sent out a worm and, in any case, why are you reporting something that happened over 5 years ago?

I too don't believe it's Henrik. But it made it here from the list, and it came yesterday, not 5 years ago. FYI, here are the log entries for that mail:

Jul 4 08:21:26 dns1 sendmail[19416]: l642XwNU019416: from=<[EMAIL PROTECTED]>, size=33846, class=0, nrcpts=1, msgid=<[EMAIL PROTECTED]>, proto=SMTP, daemon=MTA, relay=squid-cache.org [12.160.37.9]
Jul 4 08:21:26 dns1 mimedefang.pl[17467]: Found Worm.SCO.A-1 from 12.160.37.9
Jul 4 08:21:26 dns1 clamd[23763]: /var/spool/MIMEDefang/mdefang-l642XwNU019416/Work/msg-17467-48.scr: Worm.SCO.A-1 FOUND
Jul 4 08:21:26 dns1 clamd[23763]: /var/spool/MIMEDefang/mdefang-l642XwNU019416/Work/msg-17467-48.scr: Worm.SCO.A-1 FOUND
Jul 4 08:21:26 dns1 mimedefang.pl[17467]: MDLOG,l642XwNU019416,mail_in,,,<[EMAIL PROTECTED]>,<[EMAIL PROTECTED]>,[squid-users] Server Report
Jul 4 08:21:27 dns1 mimedefang.pl[17467]: filter: l642XwNU019416: drop_with_warning=1
Jul 4 08:21:26 dns1 sendmail[19416]: l642XwNU019416: Milter change: header Subject: from [squid-users] Server Report to *** VIRUS *** [squid-users] Server Report

Please disregard this if it doesn't concern anyone. I wrote to the list because when a virus/worm is sent out to the mailing list, it's not one or ten or a hundred users that are affected, it's thousands or tens of thousands.

Manoj.
--
[squid-users] Re: *** VIRUS *** [squid-users] Server Report
On Tue, 1 Jan 2002, [EMAIL PROTECTED] wrote:

WARNING: This e-mail has been altered by MIMEDefang. Following this paragraph are indications of the actual changes made. For more information about your site's MIMEDefang policy, contact Vianet System Administrator <[EMAIL PROTECTED]>. For more information about MIMEDefang, see: http://www.roaringpenguin.com/mimedefang/enduser.php3

Dropped document.scr (application/octet-stream) containing virus Worm.SCO.A-1.

Please do something about it. Found worm in a message...

Thanks
Manoj
--