RE: [squid-users] OT: software to force the client to use the proxy
I am using squid with a block list. It works great for everyone on the LAN, but the issue is that I am not able to effectively filter the internet for anyone who is not on the LAN without putting in some proxy settings. Is there software that could automatically set this up and lock the settings when not on the LAN?

The BOFH solution is to filter outbound connections to ports 80 and 443 from all hosts except the proxy. Also look at:
- Group Policy for Windows domain machines
- interception (but it can't be used with authentication)
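The port-filtering approach can be sketched as firewall rules on the gateway. This is a minimal illustration only; the proxy address 192.168.1.10 is an assumption, and you'd also want to handle the proxy's own listening port separately:

```shell
# On the gateway box: let the proxy out on 80/443, refuse everyone else.
# 192.168.1.10 (the proxy's IP) is a made-up example address.
iptables -A FORWARD -s 192.168.1.10 -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j REJECT --reject-with tcp-reset
```

Rejecting with tcp-reset (rather than silently dropping) makes browsers fail fast instead of hanging, which tends to push users toward configuring the proxy.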
RE: [squid-users] speeding up browsing? any advice?!
Thanks for the advice. I just increased the cache size to 300 GB (I have 1 TB of RAIDed HDD so I don't mind the size), and I've set the maximum object size to 15 MB. One question though: I've read that there's a certain option that keeps cached objects in memory for quick retrieval.

Usually the operating system does this for you, by caching some of the physical disk in RAM. For a forward proxy like yours, setting a large cache_mem isn't recommended IIRC.

I've got 6 GB of RAM, so I don't mind doing so. Any advice? Would it do good or not?

The more RAM the better. The OS should use it as disk cache, as I mentioned above. Are you using a 64-bit OS (better) or a 32-bit OS with PAE? If your OS reports a lot less than 6 GB you'll want to fix that. You might also find running a caching-only DNS server helps, as it should cut lookup latency across the saturated link (though it probably won't save much in the way of throughput).
RE: [squid-users] Squid 3.0.STABLE15 is available
The Squid HTTP Proxy team is pleased to announce the availability of the Squid-3.0.STABLE15 release! This release is a regular bug fix release. It contains a number of fixes for some older outstanding bugs. Changes to note in this release are:
- Regression Bug 2635: Incorrect Max-Forwards header type
- Bug 2652: 'Success' error on CONNECT requests
- Bug 2625: IDENT receiving errors
- Bug 2610: ipfilter support detection
- Bug 2578: FTP download resume failure
- Bug 2536: %H on HTTPS error pages
- Bug 2491: assertion age = 0
- Bug 2276: too many NTLM helpers running
- Endian system and compiler fixes provided by the NetBSD project

A few bugs are still open only due to a lack of feedback and testing on their patches:
- Bug 2127: NTLM crashes on delay pool class 4
- Bug 2648: NTLM helpers stuck in RESERVED state

Amos - does this mean that STABLE15 will still have problems with misconfigured websites as per http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/ ? FWIW the squid-3.0.14-chunk-encoding.patch applied in Gentoo has been effective for me. tnx, Adam
RE: [squid-users] squid in a 2 nic configuration
Essentially user1 connects to the proxy on NIC1 port 3128, and squid queries the internet on NIC2 to bring in the data the user has requested:

user 1 --- NIC1 (squid) NIC2 --- Internet --- NIC2 (squid) NIC1 --- user

Can anyone point me in the right direction to enable this functionality?

Set NIC1 up so that it's on the same LAN as user 1. Set NIC2 up so that it's on the internet, with a default gateway that allows it to reach the internet. Optionally, restrict Squid so it only listens on the IP address assigned to NIC1.

Yep, definitely set squid up so that it only listens on NIC1, using:

http_port <nic1's ip address>:3128

It's the easiest way to be sure no-one on the internet can browse your internal websites using the proxy. Also, if there are multiple subnets on the inside of your network you'll need to add static routes to the proxy to cover each of them. This is really nothing to do with squid, just normal routing setup in the OS.
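Putting the two pieces above together, a concrete sketch might look like this (all the addresses here are made-up examples, not from the original thread):

```
# squid.conf: listen only on the LAN-facing NIC
http_port 192.168.1.1:3128

# OS routing on the proxy box: a static route for a second internal subnet
# (shell command, run as root; addresses are assumptions)
#   ip route add 192.168.2.0/24 via 192.168.1.254
```

The http_port line covers the "don't serve the internet" part; the static route covers clients on internal subnets other than NIC1's own.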
[squid-users] 2.7 to 3.0.13 upgrade issue
Gentoo has recently moved stable from 2.7 to 3.0.13, and I have found that www.skylinesaustralia.com now fails with both Firefox and IE. The message from Firefox is:

Content Encoding Error
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
* Please contact the web site owners to inform them of this problem.

Going direct works. 2.7 worked. Is this a known issue? Tnx rix etc

# squid -v
Squid Cache: Version 3.0.STABLE13
configure options: '--prefix=/usr' '--host=i686-pc-linux-gnu' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--datadir=/usr/share' '--sysconfdir=/etc' '--localstatedir=/var/lib' '--sysconfdir=/etc/squid' '--libexecdir=/usr/libexec/squid' '--localstatedir=/var' '--datadir=/usr/share/squid' '--with-default-user=squid' '--enable-auth=basic,digest,negotiate,ntlm' '--enable-removal-policies=lru,heap' '--enable-digest-auth-helpers=password' '--enable-basic-auth-helpers=DB,PAM,LDAP,SMB,multi-domain-NTLM,getpwnam,NCSA,MSNT' '--enable-external-acl-helpers=ldap_group,wbinfo_group,ip_user,session,unix_group' '--enable-ntlm-auth-helpers=SMB,fakeauth' '--enable-negotiate-auth-helpers=' '--enable-useragent-log' '--enable-cache-digests' '--enable-delay-pools' '--enable-referer-log' '--enable-arp-acl' '--with-large-files' '--with-filedescriptors=8192' '--enable-snmp' '--enable-ssl' '--disable-icap-client' '--enable-storeio=ufs,diskd,aufs,null' '--enable-linux-netfilter' '--enable-epoll' '--build=i686-pc-linux-gnu' 'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu' 'CC=i686-pc-linux-gnu-gcc' 'CFLAGS=-O2 -march=pentium3 -pipe -fomit-frame-pointer' 'LDFLAGS=-Wl,-O1' 'CXXFLAGS=-O2 -march=pentium3 -pipe -fomit-frame-pointer'
RE: [squid-users] Re: squid_ldap_auth and passwords in clear text
IMHO these days Ethernet eavesdropping really isn't much of an issue (despite conventional wisdom :-). Much more dangerous are spyware/trojan keyloggers; server penetration is another danger.

Eavesdropping on all network traffic from any connection used to be a big problem when network hubs repeated all traffic everywhere. Although Ethernet has changed hugely, the old paranoia remains. Any modern device is a switch (not a hub) and only directs traffic to the one port it's destined for, so nobody else can eavesdrop.

Wrong (unless you run Cisco with DHCP snooping and Dynamic ARP Inspection, or similar). These will allow you to sniff on switches:
http://ettercap.sourceforge.net/
http://www.monkey.org/~dugsong/dsniff/
RE: [squid-users] Strange RST packet
I've found that squid is sending an RST packet to a Windows station (WinXP SP2 or WinVista). Squid is not configured to send RSTs. Is there any explanation for this?

Are you sure that the client is connecting to the correct port and that the service is running? The OS will typically respond to a SYN on a closed port with an RST.
RE: [squid-users] Someone's using my cache?
Yesterday, I wanted to get back to the cache and saw a great deal of traffic I/O on the cache, but the weird part was that none of it was for or on my network. It looked like I'd been used as some sort of payment gateway for a short while :). Anyhow, I do have firewall security in place.

Assuming the squid box is inside your firewall, then your firewall policy is incorrect. It should not allow connections from the internet to your squid box. Depending on how your network is set up, that's usually the simplest thing to change. Or, if your squid is dual-homed, stop squid from listening on the dirty interface by specifying the internal interface only:

#http_port 3128
http_port 192.168.1.1:3128

Otherwise you'll need to set up an ACL listing all your internal networks and restrict access to that only.
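The ACL approach can be sketched like this in squid.conf (the source ranges are just the RFC 1918 private networks as an example; list your actual internal networks):

```
# only clients on the internal networks may use the proxy
acl localnet src 192.168.0.0/16 10.0.0.0/8 172.16.0.0/12
http_access allow localnet
http_access deny all
```

Order matters: http_access rules are evaluated top to bottom and the first match wins, so the final "deny all" catches everything that isn't in localnet.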
[squid-users] OT - average HTTP packet size
Does anyone have a ballpark on this? It looks like one of our internal firewalls which hosts a number of DMZs is seeing an average of 400 bytes per packet. The majority of traffic is HTTP or HTTPS. Is this normal? tnx
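As a rough sanity check (my own back-of-the-envelope numbers, not a measurement): bulk HTTP downloads are mostly full-size ~1500-byte data packets in one direction and ~52-byte ACKs in the other, so a firewall counting both directions of a pure download mix would see roughly:

```shell
# one ~52-byte ACK per full-size data packet, both directions counted
data=1500
ack=52
avg=$(( (data + ack) / 2 ))
echo "$avg"   # 776
```

An observed average of 400 bytes per packet therefore suggests a larger share of small packets (requests, ACKs, handshakes, keepalives) than pure bulk transfer, which isn't unusual for interactive HTTP/HTTPS traffic.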
RE: [squid-users] Reverse - Apache - Syn Flood
Connection flooding is worse, and requires offending clients to be blacklisted by firewalling once identified. If it's a botnet, there can be tens of thousands of hosts, so blacklisting can be difficult. Also, unless you have a multi-gigabit connection, they can just fill your pipe with whatever garbage they like, and your only option then is to ask your ISP to try to filter it. There are also specialist anti-DDoS services with 10-gig connections that act as a front end to your site, filtering out the garbage and forwarding the real connections to you. You probably need to do a risk assessment to see whether it's worth spending the money to defend against botnets.
RE: [squid-users] binary install of squid
My team would like to download a binary version of squid 3.0 for Solaris. Does anybody know of such a download site/URL?

Maybe try this: http://cooltools.sunsource.net/coolstack/ JD

Or http://www.sunfreeware.com/ or http://www.blastwave.org/ (where you also get pkg-get, a Solaris clone of apt-get).
RE: [squid-users] Advantages of Squid
tc is a Linux tool to create network classes that you can route/mangle/prioritize. It's not Squid specific, and won't work on any other OS, but I used it once in a setup to route TCP_REFRESH_HIT objects out a different (much faster) link, so they could have a faster If-Modified-Since request/reply. Pretty tricky and complex.

tc is the traffic control tool from iproute2. iproute2 replaces the ifconfig and route commands with the ip and tc commands, and offers more functionality than ifconfig/route, especially for QoS via tc.
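A minimal tc sketch of the general shape of such a setup (illustrative only: the interface, rates and firewall mark are all assumptions, and the TCP_REFRESH_HIT routing described above additionally needs something, e.g. iptables, to set the mark):

```shell
# HTB root qdisc: unmatched traffic goes to class 1:20 (needs root)
tc qdisc add dev eth0 root handle 1: htb default 20
# a fast class and a default class under the root
tc class add dev eth0 parent 1: classid 1:10 htb rate 8mbit ceil 10mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 2mbit ceil 10mbit
# packets carrying firewall mark 1 get steered into the fast class
tc filter add dev eth0 parent 1: protocol ip handle 1 fw flowid 1:10
```

The fw classifier matches on marks set elsewhere (e.g. an iptables MARK rule keyed on whatever identifies the traffic you care about), which is what makes the "route cache hits differently" trick possible.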
RE: [squid-users] Where do you put this sentence?
I'm using diskd. I found this http://wiki.squid-cache.org/SquidFaq/DiskDaemon which says to run

ipcs | awk '/squid/ {printf "ipcrm -%s %s\n", $1, $2}' | /bin/sh

because sometimes shared memory and message queues aren't released when Squid exits. I'm using Linux - where should I run that command? Thanks a lot. Best Regards.

You run it at a root shell. But did you notice on that page: "On modern Linux systems the Disk Daemon has been trumped by extremely fast AUFS. diskd is still recommended for BSD variants." So since you're running Linux, change diskd to aufs in your cache_dirs and restart. The format on disk is the same so you won't lose your content. Then don't worry about the ipcs command...
RE: [squid-users] Adding secondary Disk for Cache
Assuming your disk is attached, your OS recognizes it and the disk is formatted:
1) Ensure the effective_squid_user has write capability on the mount point
2) Add a cache_dir directive to squid.conf referencing the new mount point
3) Stop squid
4) Run squid -z (as root or as the effective_squid_user)
5) Start squid

Step 0) Consider the implications on RAM or adding more cache_dir :-) You might want to reduce cache_mem or add more RAM. http://wiki.squid-cache.org/SquidFaq/SquidMemory#how-much-ram
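In squid.conf terms, step 2 might look like the following (the paths, the aufs store type, and the sizes are made-up examples - the size is in MB, followed by the number of first- and second-level directories):

```
# existing cache disk
cache_dir aufs /var/spool/squid 50000 16 256
# new second disk, mounted at /cache2
cache_dir aufs /cache2 100000 16 256
```

squid -z (step 4) then creates the 16x256 directory tree under the new mount point before squid first uses it.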
RE: [squid-users] Adding secondary Disk for Cache
Step 0) Consider the implications on RAM or adding more cache_dir :-) You might want to reduce cache_mem or add more RAM. http://wiki.squid-cache.org/SquidFaq/SquidMemory#how-much-ram

Sorry about the typo - it should be "Consider the implications on RAM OF adding more cache_dir".
RE: [squid-users] Urgent Help Needed :Two Squid Servers performance issue when working with NAT
Thanks a lot for your response. I used a sniffer tool to capture the packets on both the Polygraph server (10.56.233.99) and the Squid server (198.18.24.3) side. I could see 198.18.24.3 send out SYNs, and the SYNs could also be captured on the Polygraph server (10.56.233.99) side, but no ACK was generated by the 10.56.233.99 server.

OK, if you can't see the SYN/ACK from 10.56.233.99, it could be:
- if the box is multi-homed, the SYN/ACK may be being routed out a different interface
- the traffic may be being dropped by iptables (which sits between tcpdump and the OS)

Double check (or maybe post) your iptables configuration. Also send the output of netstat -ant | grep 198.18.24.3
RE: [squid-users] Urgent Help Needed :Two Squid Servers performance issue when working with NAT
Part of the netstat -na output on squid2 looks like the following:

tcp 0 1 198.18.24.3:46304 10.56.233.99: SYN_SENT

This shows that 198.18.24.3 can't communicate with 10.56.233.99, so assuming no firewalling, you have a routing problem (which could be a NAT problem). Run a sniffer on 10.56.233.99:
- if you don't see the SYNs coming in, then 198.18.24.3 can't route to 10.56.233.99
- if you see the SYNs come in and 10.56.233.99 reply with SYN/ACK, then you have a routing problem from 10.56.233.99 back to 198.18.24.3

Remember you need two routes to get TCP working - one to the server and one to the client. If you NAT, then you'll need route(s) for the NATed addresses as well.
RE: [squid-users] Squid in the Enterpise
I agree. But we have infrastructure problems that really push hard to make it a single IP. We'll be doing WCCP and standard proxy, but a large number of the clients have hardcoded proxy IPs, which makes it prohibitive to change to a new address.

So you have two options:
- set up this hardcoded address as a VIP on a layer 7 switch
- use a clever NAT that can round-robin the translated destination address
RE: [squid-users] Squid in the Enterpise
I am running into the standard Open Source fear at my local site.

Ask the fearmongers if they've ever heard of a little piece of software called BIND, or maybe Apache... Also, you should probably get pricing on commercial squid support, to let management know that it can be had and how much it costs.

Can anyone name some major companies that use Squid? We are talking enterprise or ISP here. We currently have about 100,000 users with heavy streaming video use. Some of the management are afraid Squid will not be able to handle the load. Our planned deployment box is an 8-way, 16GB RAM, 1TB (6 disks I think) server which will be running Red Hat Enterprise Linux.

If you do anything important with the web at your business then a single box will not be able to meet basic reliability requirements. So you have to run multiple boxes for reliability anyway, and you can scale horizontally for performance, i.e. just add enough boxes to meet your service level requirement. There are a number of ways to load balance: WCCP, proxy.pac, layer 7 switch, etc.
RE: [squid-users] Squid in the Enterpise
You should bear in mind that for a cache to be truly effective at bandwidth conservation (if that is your goal) it needs to be placed close to the users.

Maybe - it depends whether you want to save bandwidth on your LAN or on your WAN/Internet pipe. AFAIK most organisations are more concerned about WAN utilisation, since it's the expensive bit, and therefore placing the caches just on the internal side of your WAN can be a good solution.
RE: [squid-users] Squid in the Enterpise
Our planned deployment box is an 8-way, 16GB RAM, 1TB (6 disks I think) server which will be running Red Hat Enterprise Linux.

There have been some recent list discussions about how squid uses CPU - you'd be much better off with 4 load-balanced dual-core boxes than one 8-core box. RAM is cheap, so put 16 GB in all four :-) Just make sure you install the 64-bit kernel.
RE: [squid-users] Squid on steroids
The hard part is going to be directing requests to the proxies, and handling failure well. I haven't done ISP proxy deployments in a long time, so I'll leave it to others to give you advice on that part. I'm assuming you'll want it to be transparent (e.g., use WCCP)? If transparent, WCCPv2 has cache failure detection and load balancing. I imagine it would be the easiest/cheapest method if your routers support it.
RE: [squid-users] Failure URL
I currently have a set of rules such that a certain range of IP addresses has ZERO internet access. However, I would like to use the Failure URL feature to send a customized message to the users at these denied IP addresses. The problem seems to be that, since they have no access, they can't get to the failure URL. Something of an infinite loop.

Do you mean no access to the internet, or to the proxy? If you mean no access to the internet, you could use WCCP on a router that sits somewhere along the default route path to intercept the request and send it to squid, where you would have an ACL that captures the requests and presents the failure page. I think we need more info - are you using interception/proxy.pac etc.?
RE: [squid-users] Squid2-only plugin from Secure Computing
I think SmartFilter patches the squid source, so it is tied to specific versions. It certainly adds another option to the configure script. You can download it for free from Secure Computing's website and have a look. Sorry I can't be more helpful, but I'm not a developer. SmartFilter 4.2.1 works with squid 2.6-17. http://www.securecomputing.com/index.cfm?skey=1326

FYI: We have started talking to Secure Computing regarding Squid3 compatibility of the SmartFilter plugin. I will keep you updated.

Thanks Alex, good to hear. Hopefully you can come up with a model that will allow us to apply squid bugfixes without compromising Secure Computing support.
RE: [squid-users] acl from file
I have a huge txt file with domains that I want to ban, like this:

.dom.com
.dom2.net
.etc

I'm not sure if I can do this in my acl configuration:

acl banneddommains dstdomain /path/file.txt

RTFM :-) From squid.conf:

# TAG: acl
# Defining an Access List
#
# acl aclname acltype string1 ...
# acl aclname acltype "file" ...
#
# when using "file", the file should contain one item per line

So you just need to put quotes around the path, i.e.:

acl banneddommains dstdomain "/path/file.txt"
RE: [squid-users] No memory left, buffers eats all ram. Is anysolution?
I have a server with 8GB memory. ps aux shows that squid is using max 3467800.

Are you running a 64-bit OS and 64-bit squid?
RE: [squid-users] block chat
I'm setting up a squid proxy to block Gtalk, MSN, etc. I found through the internet that I should block ports 5223 and 5222 for Gtalk. I tried to block them with 'acl block_port 5223 5222' but it didn't block. Please guide me on how to block these chat protocols. Thanks.

Squid can only do something when those are tunnelled through squid via CONNECT requests, or accessed via squid using the HTTP (not HTTPS) protocol. That would require building a list of sites, hosts and ports, and maintaining it. Otherwise you need a content inspector, which can hopefully detect what protocol is used. Assuming HTTP tunnelling, SmartFilter (from Secure Computing) has an IM category. I don't know if it is granular enough to configure different IM types to block, i.e. it might block all IMs or none.
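If the clients do tunnel through squid with CONNECT, a squid.conf sketch to refuse those ports could look like this (the chat_ports ACL name is made up; this only affects traffic that actually goes through the proxy, not direct connections):

```
acl CONNECT method CONNECT
acl chat_ports port 5222 5223
http_access deny CONNECT chat_ports
```

Note that an acl line needs an acl type ("port" here) between the name and the values - that's what was missing from the attempt quoted above.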
RE: [squid-users] Can squid re-load any caches into memory from thedisk cache.
Does anyone know how to re-load an object into memory from the disk cache?

At the moment? You have to expire the object and refetch it.

So if an object gets written to disk, then subsequently becomes frequently requested, will this compromise performance, as the object must now be pulled from disk every time? Following on from that - would a smaller cache_mem, which would allow the OS to perform more disk caching, potentially perform better than a larger (but still sensibly sized) cache_mem?
RE: [squid-users] RAID is good (was: Re: [squid-users] Hardwaresetup ?)
Recently I've spent a fair bit of time benchmarking a Squid system whose COSS and AUFS storage (10GB total) + access logging are on a RAID0 array of two consumer-grade SATA disks. For various reasons, I'm stuck with RAID0 for now, but I thought you might be interested to hear that the box performs pretty well.

I don't think anyone will be interested in RAID0, as Squid's simultaneous access of each cache_dir on different disks is loosely analogous to RAID0. RAID1, on the other hand, is very interesting.

Some initial experiments suggest that removing RAID doesn't particularly improve performance, but I intend to do a more thorough set of benchmarks soon.

Following on from my comment above, a single 20-gig RAID0 cache_dir is probably not that much different to two 10-gig cache_dirs on single disks. If using aufs, then the RAID0 cache_dir would only run as a single thread, so that may adversely affect performance. I'd guess that RAID0 offers a worse seek time from squid's perspective, as each request from squid is serialised, but the data transfer rate will be higher for a particular object. I imagine squid is more sensitive to seek than throughput. I'm just speculating on all this...

Also, are you using the noatime mount option with reiserfs? Do you know what your 600-700 req/sec Polygraph polymix-4 benchmark is in Mbps?
RE: [squid-users] Using a parent cache for content filtering only
I disabled the parent cache and tested the speed, and it was a remarkable difference.

Performance problems on the parent? Using a parent in another country would affect latency but shouldn't affect throughput.
RE: [squid-users] How can I tell if snmp has been compiled intoSquid?
Is there a command I can run on Squid to see what options have been compiled in? Run squid -v and look for '--enable-snmp' in the output
RE: [squid-users] Squid2-only plugin from Secure Computing
I would be happy to try to resolve this issue with Secure Computing. However, I need more information: What exactly is the Secure Computing plugin that supports Squid2 and does not support Squid3? Does it have a name and a version number?

I think SmartFilter patches the squid source, so it is tied to specific versions. It certainly adds another option to the configure script. You can download it for free from Secure Computing's website and have a look. Sorry I can't be more helpful, but I'm not a developer. SmartFilter 4.2.1 works with squid 2.6-17. http://www.securecomputing.com/index.cfm?skey=1326
RE: [squid-users] Squid Future (was Re: [squid-users] Squid-2,Squid-3, roadmap)
My 2c WRT 2 v 3 etc:

We currently run commercial proxies and are looking to replace them with squid boxes; however, recent list discussion is making me a little nervous. I would have used 2.6 for performance (need to support 10K users) and for Secure Computing's SmartFilter, which currently runs on 2.6-17. Do the Squid3 devs have any contact with Secure Computing about SmartFilter coming to v3? Has there been any contact about v2 in the past?

Also, while I think MP would be nice, it's easy enough to load balance across multiple boxes with proxy.pac or WCCP or a layer 7 switch, so it's not a killer feature IMO. I'd be much more interested in having 3 brought up to parity with 2 rather than working on extra features. Rgs, Adam
RE: [squid-users] I want to purge too many TIME_WAITs immediatelyafter closing HTTP port.
It sounds like the problem is source port exhaustion, for the outgoing sessions that squid creates.

Why do you consider the TIME_WAIT as such to be a problem? There is no significant problem in having some hundreds of thousands of TIME_WAIT sockets on a server port.

Wouldn't there be a 65,536 limit, as you can only have that many source ports? And if you were to hit that limit, wouldn't the only way to support more concurrent established or TIME_WAIT connections be to install another box?

I think most distros restrict the source port range to a lot less than 65,536 by default; for example, my Gentoo laptop has 28,232 available:

[EMAIL PROTECTED] ~ $ cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000

So you should be able to open that up with sysctl to make more ports available.
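A sketch of widening the range with sysctl (the values here are an example, not a recommendation - stay above your listening ports):

```shell
# temporary, until reboot (needs root)
sysctl -w net.ipv4.ip_local_port_range="1024 65000"

# or permanently, by adding this line to /etc/sysctl.conf:
#   net.ipv4.ip_local_port_range = 1024 65000
```

Also note the per-destination subtlety: the source port limit applies per (source IP, destination IP, destination port) tuple, so the practical ceiling depends on how concentrated the outgoing traffic is.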
RE: [squid-users] round robin DNS and the occassional failing IP.
dig +recurse +additional +authority +notrace A google.com.au (which I freely admit I could be using wrong; or my upstream nscd on the host I am on now, which exhibited these problems before, could be being silly).

I think it would be highly unlikely that google would be advertising a dead server in its DNS for months. I would look at your DNS more closely than squid. My guess is that it (or your ISP's DNS) is not expiring the old record correctly, but you'll need to check each point involved in DNS to find where the issue is. IIRC some US ISPs are known for breaking DNS caching rules, presumably to reduce the load on their DNS. Perhaps squid could perform more cleverly in the event of a failure of this type, but to me that would be providing a band-aid to the underlying problem.

# tcpdump dst port 80
...
23:15:39.943113 IP scuzzie-home.42682 > ro-in-f104.google.com.www: SWE 1063381097:1063381097(0) win 5840 <mss 1460,sackOK,timestamp 69844178 0,nop,wscale 7>

I think 'tcpdump port 80' would be better, so you can also see any responses that may come from the webserver, which will have source port 80. Hopefully squid is clever enough to try the next IP if it receives a reset. From the timestamps, I'm guessing nothing is coming back.
RE: [squid-users] Squid currently not working.
I suggest you check your iptables rules for opening the squid port; it may be closed.

iptables could stop you from accessing the port, but couldn't stop squid from opening the port in the first place. It's not an iptables issue IMO.

I did have SELinux installed on it. For Nima: I didn't know how to check the iptables rules, so I went back to a post I made on fedoraforum.org before about FreeNX. I used these two commands: <snip> So I changed the port to 3128 (the default, I do believe) and then tried to start squid using service squid start. What do you know, it started. Don't know why, but it did. Currently though, if I try to change the port from 3128 back to port 81 (the only one I currently know of that is open fully in my school) it will not start squid. Any ideas with this problem?

Not being able to run on a port < 1024 is typically a permissions issue. Probably SELinux, but I haven't used it so I can't help there. Suggest you ask the question in a Fedora or SELinux forum.
RE: [squid-users] Squid currently not working.
[EMAIL PROTECTED] ~]# ps aux | grep squid
root 16205 0.0 0.0 4044 680 pts/2 S+ 13:14 0:00 grep squid

I also went through squid.conf to eliminate most of the comments (assuming # lines are comments - pretty sure about this). Here is what's inside:

Earlier the SELinux question was posed - did you enable that? It's a Fedora install option. Also, temporarily change http_port to something above 1023 - just to see if it will start.
RE: [squid-users] Squid currently not working.
Where are the log files that I am supposed to be looking at?

They are defined in squid.conf, e.g. on my system:

[EMAIL PROTECTED] ~ $ grep cache.log /etc/squid/squid.conf
# TAG: cache_log
cache_log /var/log/squid/cache.log
# cache.log log file is written with stdio functions, and as such
# message to cache.log. You can allow responses from unknown
# If set to warn then a warning will be emitted in cache.log
[EMAIL PROTECTED] ~ $
RE: [squid-users] Squid currently not working.
FATAL: Cannot open HTTP Port
Squid Cache (Version 2.6.STABLE16): Terminated abnormally.

Supposedly by what this says, the port can't be opened. I made sure that the firewall had it opened and that my router was forwarding it.

It's not a firewall thing; it's the operating system not allowing squid to open that port. Either the port is already in use, or squid doesn't have the correct privileges to open the port. Typically you need to be root to open a port < 1024. As root, use 'netstat -anp | grep 81' to check whether it's in use and what is using it. I use port 8080 for squid:

rix adam # netstat -anp | grep 8080
tcp 0 0 192.168.1.4:8080 0.0.0.0:* LISTEN 11852/(squid)
rix adam #
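The same "is it already in use?" check can be done with ss (the netstat replacement from iproute2); a small sketch, using port 81 from this thread as the example:

```shell
# is anything already listening on TCP port 81?
port=81
if ss -ltn 2>/dev/null | grep -q ":$port\b"; then
  status="in-use"
else
  status="free"
fi
echo "port $port is $status"
```

If the port turns out to be free, the remaining suspects are privileges: either the classic "must be root to bind below 1024" rule, or a mandatory access control layer like SELinux denying the bind.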
RE: [squid-users] Squid currently not working.
Are you running it as root?

I'd say he is - I have a Fedora 8 box (squid is not actually used on this box AFAIK):

[EMAIL PROTECTED] ~]$ service squid start
sed: can't read /etc/squid/squid.conf: Permission denied
init_cache_dir /var/spool/squid... /etc/init.d/squid: line 68: /var/log/squid/squid.out: Permission denied
Starting squid: /etc/init.d/squid: line 72: /var/log/squid/squid.out: Permission denied [FAILED]
[EMAIL PROTECTED] ~]$ su
Password:
[EMAIL PROTECTED] cartera]# service squid start
init_cache_dir /var/spool/squid... Starting squid: . [ OK ]
[EMAIL PROTECTED] cartera]#

Steve, can you post the output of 'netstat -anp | grep 81'? (It should find nothing.)
RE: [squid-users] Squid currently not working.
So now I am in the jam of finding out why it is currently not working correctly.

cache.log seems like a good place to start looking. What OS is this?
[squid-users] Hardware sizing
Hi All, our current proprietary webcaches push about 100Mbps and are due for replacement, so we're looking at Squid. Assuming a Lintel platform, what spec of hardware would provide, say, 2-3 times that performance? We run LDAP authentication, complex ACLs and SmartFilter. Cheers, Adam