[squid-users] squid 3.1.0.15, TPROXY; cache.log empty

2010-02-23 Thread Rhino

my system:
Squid 3.1.0.15 (run/installed as "squid3")
WCCP v2 HASH (Cisco switch)
CentOS 5.4, kernel 2.6.30.10 w/TPROXY enabled
iptables v1.4.4

WCCP is established between squid and the switch.
TPROXY iptables rules are set to forward tcp port 80 to 3128.
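For reference, the rule set in use follows the standard TPROXY pattern from the Squid wiki (a sketch only; the mark value 0x1 and the routing table number 100 are assumptions, not taken from my actual config):

```shell
# Divert packets that already belong to a locally terminated socket
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

# Deliver marked packets locally
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# Hand the remaining port-80 traffic to squid's TPROXY port
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
  --tproxy-mark 0x1/0x1 --on-port 3128
```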

A client on a separate test subnet can browse the internet via the squid 
server (a TCPDUMP of the squid interface shows the client HTTP traffic 
being served); however, there are zero entries in access.log.
When WCCP is turned off on the switch, the client requests are no longer 
visible via TCPDUMP.

Squid starts and runs as you would expect, with no visible errors.
WCCP is working, and the iptables TPROXY rules appear to be working as 
well - the client IP is revealed at numerous IP-verification websites - 
so I am led to believe I've botched my squid config somehow. However, 
when previously tested in intercept-only mode (no TPROXY rules, just NAT 
and a port 80 redirect), access.log populated with the client requests.


No doubt it's something pretty basic that I must have overlooked, but 
for the life of me I'm not seeing what it may be.

Any thoughts?
I'm including the cache.log debug output below.
thanks
-Ryan

[r...@proxy squid3]# squid -X start
2010/02/23 13:55:34.707| command-line -X overrides: ALL,7
2010/02/23 13:55:34.707| CacheManager::registerAction: registering legacy mem
2010/02/23 13:55:34.707| CacheManager::findAction: looking for action mem
2010/02/23 13:55:34.707| Action not found.
2010/02/23 13:55:34.707| CacheManager::registerAction: registered mem
2010/02/23 13:55:34.707| CacheManager::registerAction: registering legacy squidaio_counts
2010/02/23 13:55:34.707| CacheManager::findAction: looking for action squidaio_counts
2010/02/23 13:55:34.707| Action not found.
2010/02/23 13:55:34.707| CacheManager::registerAction: registered squidaio_counts
2010/02/23 13:55:34.707| CacheManager::registerAction: registering legacy diskd
2010/02/23 13:55:34.707| CacheManager::findAction: looking for action diskd
2010/02/23 13:55:34.707| Action not found.
2010/02/23 13:55:34.707| CacheManager::registerAction: registered diskd
2010/02/23 13:55:34.707| aclDestroyACLs: invoked
2010/02/23 13:55:34.707| ACL::Prototype::Registered: invoked for type src
2010/02/23 13:55:34.707| ACL::Prototype::Registered:yes
2010/02/23 13:55:34.707| ACL::FindByName 'all'
2010/02/23 13:55:34.707| ACL::FindByName found no match
2010/02/23 13:55:34.707| aclParseAclLine: Creating ACL 'all'
2010/02/23 13:55:34.707| ACL::Prototype::Factory: cloning an object for type 'src'
2010/02/23 13:55:34.707| aclIpParseIpData: all
2010/02/23 13:55:34.707| aclIpParseIpData: magic 'all' found.
2010/02/23 13:55:34.707| aclParseAccessLine: looking for ACL name 'all'
2010/02/23 13:55:34.707| ACL::FindByName 'all'
2010/02/23 13:55:34.707| Processing Configuration File: /usr/local/squid3/etc/squid.conf (depth 0)
2010/02/23 13:55:34.708| Processing: 'acl manager proto cache_object'
2010/02/23 13:55:34.708| ACL::Prototype::Registered: invoked for type proto
2010/02/23 13:55:34.708| ACL::Prototype::Registered:yes
2010/02/23 13:55:34.708| ACL::FindByName 'manager'
2010/02/23 13:55:34.708| ACL::FindByName found no match
2010/02/23 13:55:34.708| aclParseAclLine: Creating ACL 'manager'
2010/02/23 13:55:34.708| ACL::Prototype::Factory: cloning an object for type 'proto'
2010/02/23 13:55:34.708| Processing: 'acl localhost src 127.0.0.1/32'
2010/02/23 13:55:34.708| ACL::Prototype::Registered: invoked for type src
2010/02/23 13:55:34.708| ACL::Prototype::Registered:yes
2010/02/23 13:55:34.708| ACL::FindByName 'localhost'
2010/02/23 13:55:34.708| ACL::FindByName found no match
2010/02/23 13:55:34.708| aclParseAclLine: Creating ACL 'localhost'
2010/02/23 13:55:34.708| ACL::Prototype::Factory: cloning an object for type 'src'
2010/02/23 13:55:34.708| aclIpParseIpData: 127.0.0.1/32
2010/02/23 13:55:34.708| aclIpParseIpData: '127.0.0.1/32' matched: SCAN3-v4: %[0123456789.]/%[0123456789.]
2010/02/23 13:55:34.708| Ip.cc(517) FactoryParse: Parsed: 127.0.0.1-[::]/[:::::::](/128)
2010/02/23 13:55:34.708| Processing: 'acl localhost src ::1/128'
2010/02/23 13:55:34.708| ACL::Prototype::Registered: invoked for type src
2010/02/23 13:55:34.708| ACL::Prototype::Registered:yes
2010/02/23 13:55:34.708| ACL::FindByName 'localhost'
2010/02/23 13:55:34.708| aclParseAclLine: Appending to 'localhost'
2010/02/23 13:55:34.708| aclIpParseIpData: ::1/128
2010/02/23 13:55:34.708| aclIpParseIpData: '::1/128' matched: SCAN3-v6: %[0123456789ABCDEFabcdef:]/%[0123456789]
2010/02/23 13:55:34.708| Ip.cc(517) FactoryParse: Parsed: [::1]-[::]/[:::::::](/128)
2010/02/23 13:55:34.708| aclIpAddrNetworkCompare: compare: 127.0.0.1/[:::::::] (127.0.0.1)  vs [::1]-[::]/[:::::::]
2010/02/23 13:55:34.708| aclIpAddrNetworkCompare: compare: [::1]/[:::::::] ([::1])  vs 127.0.0.1-[::]/[:::

[squid-users] intermittent timeouts Cisco 4948 switch, WCCPv2, Squid 2.6STABLE12

2008-05-21 Thread Rhino




Have WCCPv2 running between a Cisco 4948 gigE switch and Squid on a 
Linux server (WCCPv2 is working fine; I see redirects in TCPDUMP).

Routing incoming WCCP redirects to ETH0 and outgoing traffic to ETH1 on 
the server.
Squid starts without error and performs well for about 20 minutes; then 
some web pages time out indiscriminately and customers must refresh 
several times (an "address not valid" error appears in the browser).


Don't see any errors in the access.log

Approximately 7500 customers can be hitting the Squid server during 
heavy use, but the box has more than adequate memory and disk space to 
accommodate those numbers, from what I've read. Could the page time-out 
errors be due to DNS settings?


Any help/recommendations are appreciated.
thanks
-Ryan

Setup Details below:

Squid Server:
GNU/Linux kernel 2.6.19.7
4 AMD dual-core 2.6 GHz Opteron processors
32 GB DDR2 RAM
4 x 278 GB cache drives
Cisco 4948 switch running 12.2(40)SG

Squid server ETH0 > Cisco 4948 switch WCCPv2 vlan port
Squid server ETH1 > Cisco 4948 switch INTERNET vlan port

IPTABLES PREROUTING 0.0.0.0/0 port 80 to 0.0.0.0/0 port 3124
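The PREROUTING redirect described above would typically be expressed as something like the following (a sketch; the interface name eth0 is taken from the setup details above, the exact match options are assumptions):

```shell
# NAT-redirect all inbound port-80 traffic on the WCCP interface to squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j REDIRECT --to-port 3124
```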


http_port xxx.xxx.xxx.xxx:3124 transparent
http_port localhost:
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl our_networks src xxx.xxx.xxx.xxx/19 xxx.xxx.xxx.xxx/19
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 16 GB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 4096 KB
memory_replacement_policy lru
#memory_replacement_policy LFUDA
cache_dir aufs /squid0 285520 16 256
cache_dir aufs /squid1 285520 16 256
cache_dir aufs /squid2 285520 16 256
cache_dir aufs /squid3 285520 16 256
dns_nameservers xxx.xxx.xxx.xxx xxx.xxx.xxx.xxx
positive_dns_ttl 1 minute
negative_dns_ttl 1 second
logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %

[squid-users] JEDI unsupported http request error

2008-05-29 Thread Rhino


In my cache.log I get flooded with the following error:
parseHttpRequest: Unsupported method 'JEDI'
From what I've been able to find, JEDI must refer to "Joint Endeavour 
of Delphi Innovators". Has anyone encountered this? Can you suggest a 
workaround, etc.?

Running  SQUID 2.7.STABLE1-20080528 on Debian 2.6.19.7
thnx
-Ryan
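One possible workaround (hedged: squid 2.x documents an `extension_methods` directive for teaching the request parser additional method names; whether you actually want this traffic proxied rather than rejected is a policy question):

```
# squid.conf: let the parser accept the otherwise-unknown JEDI method
extension_methods JEDI
```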



[squid-users] helperOpenServers: "-s is not supported on this resolver"

2008-07-10 Thread Rhino
Running squid-2.7.STABLE1-20080528 on a Debian linux 2.6.19.7 kernel, 
using wccp2 and iptables for transparency.
Squid was configured with --disable-internal-dns, and I have "dns_children 
96", "dns_defnames off", and "dns_nameservers xxx.xxx.xxx.xxx 
xxx.xxx.xxx.xxx" in my squid.conf file.
Put into production approx. 2 weeks ago; approx. 10k customers are 
browsing transparently without any complaints of latency, and we're 
seeing a measurable incoming bandwidth savings on our ISP links - so 
things seem to be performing well.
My question concerns the cache.log entries I see when starting squid - 
immediately following "helperOpenServers: Starting 96 'dnsserver' 
processes" I get several log lines which read "-s is not supported on 
this resolver". Where would this flag be set, and how do I modify the 
startup config to avoid the error?

I'm sure it's something simple, but I'm not finding it.
Appreciate your help.
-Ryan







[squid-users] empty core files

2008-07-10 Thread Rhino
Running squid-2.7.STABLE1-20080528 on a Debian linux 2.6.19.7 kernel, 
using wccp2 and iptables for transparency.
Put into production approx. 2 weeks ago; approx. 10k customers are 
browsing transparently without any complaints of latency, and we're 
seeing a measurable incoming bandwidth savings on our ISP links - so 
things seem to be performing well. However, I've discovered a number of 
completely empty core files in my squid logs directory. Some are as 
little as 3 minutes apart. Could an exploited vulnerability be 
generating these?

Appreciate the help.
-Ryan




Re: [squid-users] squid in ISP

2008-07-11 Thread Rhino

Siu-kin Lam wrote:
Dear all,

Any experience using squid as a cache in an ISP environment?

thanks
SK

I'm sure there are much larger ISPs out there that have been using it 
much longer; just passing along our info.
We're a small ISP serving around 10k dialup, DSL, cable modem, and MAN 
subscribers via a WAN that is dual-homed via BGP to different ISPs.
We loaded squid on a quad-core linux box with around 1.2 TB disk 
capacity and 32 GB RAM, using a Cisco 4948 switch and WCCP2 to 
transparently redirect to Squid.
There were some major hurdles along the way (mostly getting the 4948 to 
pass the L2 WCCP traffic - 2 IOS bugs and a year in the process), but 
once that worked and we got our IPTABLES set up properly, transparent 
redirection has been working quite well.
Some tweaks were needed to our Squid config, but with the help of this 
list - particularly Henrik and Amos' posts - at this point we're very 
encouraged by the performance and bandwidth savings we're seeing on the 
system, which has only been truly active for around 3 weeks now.
Again, we're a pretty small shop - so when our old NetApp NetCache was 
no longer able to adequately handle the load, we needed an effective, 
minimal-cost solution, which this is demonstrating to be.
Hope that helps.
-Ryan


[squid-users] Recommended cache_dir config for large system

2008-07-16 Thread Rhino


Have Squid 2.7.STABLE1 running on a Debian Linux box with four 278 GB 
drives allotted for caching.

The system has 32 GB RAM and serves approx. 10k users.
Seeking input as to the most effective config options on this system in 
order to reduce latency and maximize throughput.
cheers
-Ryan



[Fwd: Re: [squid-users] Recommended cache_dir config for large system]

2008-07-16 Thread Rhino




Henrik Nordstrom wrote:

On ons, 2008-07-16 at 13:20 -0500, Rhino wrote:
Have Squid 2.7.STABLE1 running on a Debian Linux box with four 278 GB 
drives allotted for caching.

System has 32 GB RAM, and serves approx. 10k users.
Seeking input as to the most effective config options on this system in 
order to reduce latency and maximize throughput.


Ample amounts of filedescriptors. Give Squid 32K filedescriptors or so
(--max-fd configure option, or max_filedescriptors config option in
squid.conf 2.7)

maxed number of outgoing TCP ports. (ip_local_port_range sysctl)

persistent connections enabled (default)

aufs cache_dir type, with two cache_dir per drive. L1=128, L2=256.

Make sure to leave some GB of RAM for the OS + fs cache. Don't be too
aggressive about the Squid memory usage.

Do not configure swap. Instead leave sufficient margin in the memory
usage.

Should also give some advice on which filesystem to use, but ext3 is
probably fine. reiserfs is also a good candidate but may need the notail
mount option.

noatime mount option on the cache drives.
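The checklist above could translate into something like this in a 2.7 squid.conf (a sketch under stated assumptions: the directory names and the 130000 MB per-dir size are invented for illustration, and assume two aufs dirs per physical drive as recommended):

```
# squid.conf 2.7 sketch: ample fds, two aufs cache_dirs per drive,
# L1=128 and L2=256 per the advice above
max_filedescriptors 32768
cache_dir aufs /squid1a 130000 128 256
cache_dir aufs /squid1b 130000 128 256

# and on the OS side, widen the outgoing TCP port range, e.g.:
#   sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```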

Regards
Henrik



Thanks much for the quick response, Henrik.
The filesystem for the cache disks is currently configured as reiserfs
with notail/noatime opts.
I did not have the fd amounts set, nor ip_local_port_range.
My cache_dirs have each disk mounted as a partition, i.e. disk1=/squid1,
disk2=/squid2; would your suggestion then be to halve each disk and make
each half a cache_dir? (i.e., go from squid1-4 to squid1-8 across the 4
disks)
Also, fyi, I have a 5th disk of equal size that has to be used for the
OS - so these 4 are totally dedicated to Squid.
thanks again, appreciate your input.
-Ryan





Re: [Fwd: Re: [squid-users] Recommended cache_dir config for large system]

2008-07-16 Thread Rhino

Richard Hubbell wrote:

Thanks much for the quick response, Henrik.
Filesystem for cache disks currently configured for reiserfs with
notail/noatime opts.
I did not have the fd amounts set, nor ip_local_port_range.
My cache_dirs have each disk mounted as partition, i.e. disk1=/squid1
disk2=/squid2; would your suggestion be then to halve each disk and
partition each as cache_dir? (i.e., go from squid1-4 to squid1-8 across
the 4 disks)
Also have a 5th disk of equal size that has to be used for OS, just fyi
- so these 4 are totally dedicated to Squid.
thanks again, appreciate your input.
-Ryan


Just curious why reiserfs?  I don't think it's supported any longer.


Size/speed considerations when we set the system up originally. It's 
worked well so far.

cheers


Re: [squid-users] squid 2.6 with wccpv2 error ... router id !

2008-07-17 Thread Rhino

Alexandre Correa wrote:

Hello,

I'm having problems setting up WCCP with squid and FreeBSD.

my setup:

router:
!
!
ip wccp web-cache
interface Loopback0
 ip address 10.254.254.2 255.255.255.255
!
interface FastEthernet0/0/0
  description *** lan to clients ***
  ip address 189.x.x.1 255.255.255.0
  ip wccp web-cache redirect in
..
..


squid.conf
http_port 3128 transparent

wccp2_router 10.254.254.2
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0


freebsd:
bge0: 189.x.x.3
ifconfig gre0 create inet 189.x.x.3 10.254.254.1 netmask
255.255.255.255 link2 tunnel 189.x.x.3 10.254.254.2 up

ipfw list:
01000 fwd 127.0.0.1,3128 tcp from any to any dst-port 80 recv gre0
65535 allow ip from any to any


#sh ip wccp
Global WCCP information:
Router information:
Router Identifier:   10.254.254.2
Protocol Version:2.0

Service Identifier: web-cache
Number of Cache Engines: 0
Number of routers:   0
Total Packets Redirected:0
Redirect access-list:-none-
Total Packets Denied Redirect:   0
Total Packets Unassigned:0
Group access-list:   -none-
Total Messages Denied to Group:  0
Total Authentication failures:   0


#sh ip wccp web-cache detail
WCCP Cache-Engine information:
Web Cache ID:  10.254.254.1
Protocol Version:  2.0
State: NOT Usable
Initial Hash Info: 
   
Assigned Hash Info:
   
Hash Allotment:0 (0.00%)
Packets Redirected:0
Connect Time:  00:00:08


ifconfig gre0
gre0: flags=d051 mtu 1476
tunnel inet 189.x.x.3 --> 10.254.254.2
inet 189.x.x.3 --> 10.254.254.1 netmask 0x



Can someone say where I'm going wrong?!

thanks !!!

regards,


Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net



Shouldn't your squid config specify the assignment method as well?
Also, maybe declare wccp2_address.
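In squid.conf terms, that suggestion would look something like this (a sketch; the hash choice and binding to the bge0 address 189.x.x.3 are assumptions based on the setup quoted above):

```
wccp2_assignment_method 1   # 1 = HASH assignment
wccp2_address 189.x.x.3     # source address for WCCP "Here I Am" packets
```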

-Ryan


Re: [squid-users] Squid in the Enterpise

2008-07-17 Thread Rhino

Leonardo Rodrigues Magalhães wrote:



Robert V. Coward wrote:
I am running into the standard "Open Source" fear at my local site. 
Can anyone name some major companies that use Squid? We are talking 
enterprise or ISP here. We currently have about 100,000 users with 
heavy streaming video use. Some of the management are afraid Squid 
will not be able to handle the load.
Our planned deployment box is an 8-way, 16 GB RAM, 1 TB (6 disks I think) 
server which will be running RedHat Enterprise Linux.

In my opinion, 100k users is just too much for a single machine, even 
if it's a 'super' machine. And let's not think only about machine load ... 
let's think about a machine crash or failure of some kind. 100k users 
are enough for you to start thinking about clustering of some kind.

I agree with Richard Hubbell. 100k users is reason enough for you 
to look for an expert to analyze and build this project for you.

We're not talking about 100 or 1k users ... we're talking about 100k. 
100k users on a standard (not optimized) device/system configuration 
will probably trash any cache solution, and squid won't be an exception.




Besides the items previously addressed (and should we mention that many 
of the "commercial" caches use open solutions?), you should bear in 
mind that for a cache to be truly effective at bandwidth conservation 
(if that is your goal), it needs to be placed close to the users. So if 
you're talking about an ISP with 100k users, I doubt they all reside on 
one or two LANs - and you'd do well to establish a topology with 
several caches, each servicing its own group of users. What you'd save 
by not having to add bandwidth overall would surely recoup the costs of 
the additional hardware, imho.
hth
-Ryan


Re: [squid-users] Squid and WCCP hardware placement

2008-10-16 Thread Rhino

B.
cheers
-Ryan


Johnson, S wrote:

I'm working on getting this working but I'm unclear on the hardware placement 
for each of the devices.

Is it:

A)
Workstation -> Cisco (WCCP) -> Squid (NAT) -> internet

B)
Workstation -> Cisco (WCCP)
                 |
                 +-> Squid (NAT) -> internet

C)
Workstation -> Cisco -> Internet
                 | (WCCP)
                 +-> Squid

D) or???

Thanks a bunch.