Re: [squid-users] http CONNECT method with fwd proxy to content server on same subnet

2010-05-17 Thread Quin Guin
Amos,

  Thank you for your reply; please see my responses inline below:

Thanks,

Guin



- Original Message 
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Sat, May 15, 2010 2:14:14 AM
Subject: Re: [squid-users] http CONNECT method with fwd proxy to content server 
on same subnet

Quin Guin wrote:
 Hi,
 
 I have a new need for deploying squid in my environment and I have
 been trying to set it up, but it is not working as expected. Please
 see my requirements below. I have tried this with both 2.7-STABLE9
 and 3.1.3 on CentOS 4.6 64-bit.
 
 I have a remote server sending an HTTP CONNECT to my server, but my
 server can't handle an HTTP CONNECT. So I wanted to use squid to

Something is badly broken there. CONNECT is not a generic HTTP request method. 
It is specifically for browser-to-proxy and proxy-to-proxy communication.
You should never receive it at a web server or web app interface.
 

 I agree this is not a good design, but I didn't have a say in it; I am just 
stuck with making it work. The requests come from browser-to-proxy over port 
8080, and my idea is to go proxy(squid)-to-proxy(ours), where our proxy 
doesn't handle the CONNECT method. Yes, I know this is far from ideal, but I 
am just trying to have SQUID, as a forward proxy, receive the request and 
then send it to our proxy as a regular https request, still encrypted, 
without the CONNECT method.

 handle the CONNECT method and then send the https requests to my
 local server to handle the request. I know that a transparent proxy
 doesn't know how to handle the SSL requests because it is not operating

Yes, nor does it legally handle the CONNECT method, since interception mode 
should only be handling valid web server interface methods.

I agree with that..

 as a normal proxy. So I have been using squid as a fwd proxy, but it
 keeps sending the http CONNECT method to my end server, which is
 causing issues. So I am asking for ideas on what I need to do to make
 this work. I have tried various iptables rules and cache_peers, but
 nothing seems to work. I am using pretty much the default config,
 except for my local network IPs and an ACL to allow the traffic.
 
 I would appreciate any ideas..

Do you have access or control to configure the remote server properly?

No, I do not, but I really wish I did, because then I would not be doing this.

What is your current squid.conf configuration for http_port, http_access and 
cache_peer rules?


Here is the config:

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl SOL src xxx.xxx.xxx.xxx/24 # novarra


http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow CONNECT SOL
http_access allow CONNECT localnet
http_access allow SOL
http_access deny all
icp_access allow localnet
icp_access deny all

http_port 8080
htcp_port 0
icp_port 0

never_direct allow all
cache_peer 172.18.0.39 parent 8775 0 no-query default
cache_peer_access 172.18.0.39 allow CONNECT
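
For reference, here is a minimal sketch of the peer-forwarding pieces above, 
written out with the stock ACL definitions they depend on (CONNECT and 
SSL_ports are the definitions shipped in the default squid.conf). One 
possible snag with the posted rules: with never_direct allow all, any 
request that cache_peer_access denies has nowhere to go, so the sketch 
widens the peer access to all:

acl SSL_ports port 443
acl CONNECT method CONNECT

cache_peer 172.18.0.39 parent 8775 0 no-query default
never_direct allow all
# let every request, CONNECT included, be relayed to the parent
cache_peer_access 172.18.0.39 allow all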



Amos
-- 
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3



  


Re: [squid-users] http CONNECT method with fwd proxy to content server on same subnet

2010-05-17 Thread Quin Guin
Thank you Henrik,

 Yes, I agree; as I stated in my reply to Amos, this is not an ideal or good 
design, but I need to make it work.

I do have SQUID configured as a forward proxy, but I think I need to set up 
a routing policy (iptables) to make everything go directly through our 
servers, as they act like a proxy, but not a caching proxy, and cannot 
handle the CONNECT method.

Any ideas would be greatly appreciated. I have looked at and tried the 
config examples in the FAQ & Wiki on squid-cache.org.


best regards,

Guin





 



- Original Message 
From: Henrik Nordström hen...@henriknordstrom.net
To: Quin Guin quing...@yahoo.com
Cc: squid-users@squid-cache.org
Sent: Sat, May 15, 2010 3:17:57 AM
Subject: Re: [squid-users] http CONNECT method with fwd proxy to content server 
on same subnet

Fri 2010-05-14 at 07:17 -0700, Quin Guin wrote:

 I have a remote server sending an HTTP CONNECT to my server, but my
 server can't handle an HTTP CONNECT. So I wanted to use squid to
 handle the CONNECT method and then send the https requests to my local
 server to handle the request. I know that a transparent proxy doesn't
 know how to handle the SSL requests because it is not operating as a
 normal proxy. So I have been using squid as a fwd proxy, but it keeps
 sending the http CONNECT method to my end server, which is causing
 issues. So I am asking for ideas on what I need to do to make this
 work. I have tried various iptables rules and cache_peers, but nothing
 seems to work. I am using pretty much the default config, except
 for my local network IPs and an ACL to allow the traffic.

You should not require anything special. Just Squid configured as a
plain proxy and allowing this remote server to access it.

Note that you SHOULD NOT configure Squid as a reverse proxy. CONNECT is
a proxy method.

But as Amos mentioned, why is that remote server sending you CONNECT
requests in the first place? It is probably better to address the problem
there.

Regards
Henrik





[squid-users] http CONNECT method with fwd proxy to content server on same subnet

2010-05-14 Thread Quin Guin
Hi,

 I have a new need for deploying squid in my environment, and I have been 
trying to set it up, but it is not working as expected. Please see my 
requirements below. I have tried this with both 2.7-STABLE9 and 3.1.3 on 
CentOS 4.6 64-bit.
 
I have a remote server sending an HTTP CONNECT to my server, but my server 
can't handle an HTTP CONNECT. So I wanted to use squid to handle the CONNECT 
method and then send the https requests to my local server to handle the 
request. I know that a transparent proxy doesn't know how to handle the SSL 
requests because it is not operating as a normal proxy. So I have been using 
squid as a fwd proxy, but it keeps sending the http CONNECT method to my end 
server, which is causing issues. So I am asking for ideas on what I need to 
do to make this work. I have tried various iptables rules and cache_peers, 
but nothing seems to work. I am using pretty much the default config, except 
for my local network IPs and an ACL to allow the traffic.

I would appreciate any ideas..

Thanks,

Guin



  


[squid-users] coredumps on 2.7

2009-11-26 Thread Quin Guin
Hi,

 I am running 2.7-STABLE6 on many squid servers, and just recently, in the 
last few days, I am seeing a lot of coredumps. I still have most of the 
coredumps, and I would like to understand what happened.

 I did search the mailing list, and I used gdb to generate a stack trace, 
but it didn't give me a lot of useful information.

[xxx...@cach2 cache]# gdb squid core.31033
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as i386-redhat-linux-gnu...squid: No such file or 
directory.

Core was generated by `(squid)'.
Program terminated with signal 6, Aborted.
#0  0x0054a7a2 in ?? ()
(gdb) where
#0  0x0054a7a2 in ?? ()
#1  0x0058f7a5 in ?? ()
#2  0x in ?? ()
(gdb) 



 So I was wondering if someone could point me to where I can find more 
information on interpreting the coredumps.
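
For what it is worth, the "squid: No such file or directory" text in the gdb 
banner above means gdb could not open the squid binary itself, so no symbols 
were loaded and the trace shows only ?? frames. A sketch of a more useful 
invocation, assuming a default --prefix install path for the binary:

# point gdb at the binary that actually produced the core
gdb /usr/local/squid/sbin/squid core.31033
(gdb) bt full

If the binary was built without debug symbols, rebuilding with CFLAGS=-g and 
waiting for the next crash should give readable frames.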

  I did get a lot of coredumps when I enabled Digest on a 3-node set of 
Squids. I have since disabled the peers until I can determine if that is the 
cause on that set of servers.
 
 I would appreciate all feedback.

Best regards,

Quin



  


Re: [squid-users] negative_ttl

2009-09-22 Thread Quin Guin
Thank you for the quick reply to my question; I have responded to some of 
your questions below. I also have a question about the ACL that Chris sent.

This uses http_status, which is a 3.x feature and not a 2.7 one, so I am 
trying to use rep_header instead, and I am not having any luck with it. I 
have examples below; if someone could shed some light on how I can match the 
50x status values, that would be great. 

 acl HTTPStatus503 http_status 503
 cache deny HTTPStatus503
 
acl HTTPStatus503 rep_header Status -i 503
cache deny HTTPStatus503
or
acl HTTPStatus503 rep_header status -i 503
cache deny HTTPStatus503


I have tried both, and I have enabled debugging, and they are not getting 
hit. I have tethereal traces to verify the status values contain 503, and 
they do. I am using a very simple perl script to generate the 503 for 
testing, and I am only using Cache-control: max-age=240, so it's short-lived.

HTTP/1.1 503 Service Temporarily Unavailable 
Date: Tue, 22 Sep 2009 12:45:43 GMT 
Server: Apache/2.2.3 (Red Hat) 
Expires: Tue, 22 Sep 2009 07:49:43 CDT 
Cache-control: max-age=240 
Connection: close 
Content-Type: text/html; charset=ISO-8859-1
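
Worth noting: in the trace above, the 503 appears only on the HTTP status 
line; there is no actual "Status:" reply header for a rep_header ACL to 
match, which would explain why neither variant ever gets hit. The 
http_status form quoted above is the direct way to match on the code in 3.x; 
a sketch (extending it to the other 50x codes is an illustrative assumption):

acl HTTPStatus5xx http_status 500 502 503 504
cache deny HTTPStatus5xx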

Please see below for answers to other questions..



- Original Message 
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Monday, September 21, 2009 10:18:48 PM
Subject: Re: [squid-users] negative_ttl

On Mon, 21 Sep 2009 17:12:44 -0800, Chris Robertson crobert...@gci.net
wrote:
 Quin Guin wrote:
 Hi, 

  I am seeing a behavior with the negative_ttl option and I would like to
  get confirmation on its behavior.


  I am using 2.7.Stable6 

 I am having an issue with a content provider that is setting the
 max_age=604800 on 503 error pages and so their 503 error pages are
 getting cached for the length expire time.
 
 If it's just 503's you are having trouble with...
 
 acl HTTPStatus503 http_status 503
 cache deny HTTPStatus503
 
 ...will deny caching of any response with a 503 code.  Fine tune it with 
 an additional dstdomain acl as needed.
 
  I know that the content provider should correct this and I have
  communicated that to them several times but it gets fixed and then it
  gets set again..ugh!! So everyone saying SQUID has a bug or broke..

Set again? (a) you mean the provider is undoing their max-age fix?  or (b)
that the pages coming out of squid have it set that way despite the
provider being correctly set at the time?

Ans.. Yes, the provider is undoing their fixes. I have not seen option (b) 
happen.

(b) is a Squid problem, probably resolved by purging the relevant URLs from
cache after the provider fix happens. 2.7 does not contain bug #7, so it
should self-correct when that week is over.

Ans.. Purging does resolve it; yes, it does self-correct per the age value.

(a) does seem to be an issue somewhere between the provider web server and
Squid. It may be the provider themselves, or a cache between you two.

  Ans.. It is between the provider and SQUID; squid then replies with what 
it was given. I do think they are using a reverse proxy, because I have seen 
errors come in.



/personal opinion::
Specifying that temporary (possibly from only a single request) network
failures should be reported to all visitors for a week after they occur is
very excessive.  IMHO the caching timeouts of 5xx should be in the order of
minutes, 4xx possibly hours. Not days or weeks for either.

Ans.. I agree with you 100%.


 I have set negative_ttl 0 in hopes that negatively cached pages
 don't get cached at all, not even for the default 5 min. This works for
 pages that don't have max-age values, or very low ones. I just want to
 confirm that this is the expected behavior for negative_ttl.

This will not impact your problem, but

... you should have that anyway.  Setting it to zero disables Squid's forced
minimum caching time, leaving squid to follow the correct RFC-compliant
behavior, which is defined by the 4xx/5xx reply Expires: and Cache-Control:
headers received, or to discard immediately if they send none.


I thought 2.7 had the correct max-age handling. I suspect there may be
another header or CC: value sent which impacts the caching. 2.7.STABLE7
has a fix re-prioritizing the stale-* CC: values, and an Expires: header
being present has priority over max-age.

Chris Robertson's solution will get you around the problem provider's
headers.


  If so, I think my next course of action in the 2.7 build line is to use
  an acl with deny on http status values? If anyone has done this and
  would like to share what they did or can point me to some docs or
  something similar, I would appreciate that.


 I know 3.1 has the ability to do what I need, but I am not ready to roll
 that out to production yet.

 Thanks,

 Quinguin
  
 
 Chris

Amos



  


[squid-users] negative_ttl

2009-09-21 Thread Quin Guin
Hi, 

 I am seeing a behavior with the negative_ttl option and I would like to get 
confirmation on its behavior.


 I am using 2.7.Stable6 

I am having an issue with a content provider that is setting max-age=604800 
on 503 error pages, and so their 503 error pages are getting cached for the 
full expiry time. I know that the content provider should correct this, and 
I have communicated that to them several times, but it gets fixed and then 
it gets set again..ugh!! So everyone says SQUID has a bug or is broken..

I have set negative_ttl 0 in hopes that negatively cached pages don't get 
cached at all, not even for the default 5 min. This works for pages that 
don't have max-age values, or very low ones. I just want to confirm that 
this is the expected behavior for negative_ttl. 
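
For reference, the directive as it would sit in squid.conf (a sketch; 5 
minutes is the 2.7 default being overridden):

# do not force a minimum caching time on error replies; 4xx/5xx then
# follow whatever Expires/Cache-Control headers the origin actually sent
negative_ttl 0 seconds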

 If so, I think my next course of action in the 2.7 build line is to use an 
acl with deny on http status values? If anyone has done this and would like 
to share what they did or can point me to some docs or something similar, I 
would appreciate that.


I know 3.1 has the ability to do what I need, but I am not ready to roll 
that out to production yet.

Thanks,

Quinguin


  


Re: [squid-users] squid becomes very slow during peak hours

2009-07-01 Thread Quin Guin

Hi,

  I am running both the 2.7-STABLE6 and 3.1.0.8 versions on
more than a few servers. On average I am pushing about 230+ TPS, and at peak
usage I don't see delays from SQUID unless you come across a content
server that is having issues or isn't cache friendly; DNS issues will also
cause problems. But I can't stress enough how fast you should ditch
RAID altogether and go with a JBOD; you will see a big improvement
no matter what OS you are running.
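
A sketch of what that JBOD layout looks like in squid.conf, assuming three 
independent disks mounted as /cache1 through /cache3 (paths and sizes are 
illustrative):

# one cache_dir per physical spindle, no RAID layer underneath
cache_dir aufs /cache1 60000 16 256
cache_dir aufs /cache2 60000 16 256
cache_dir aufs /cache3 60000 16 256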


Quin


- Original Message 
From: goody goody think...@yahoo.com
To: squid-users@squid-cache.org
Cc: Chris Robertson crobert...@gci.net; balique8...@yahoo.com; 
hen...@henriknordstrom.net; Amos jafferies Squid GURU squ...@treenet.co.nz
Sent: Wednesday, July 1, 2009 7:07:59 AM
Subject: Re: [squid-users] squid becomes very slow during peak hours


Thanks for the replies,

1. I have tried squid 3.0 STABLE14 for a few weeks, but the problems were 
still there and the performance issues were also severe. As we previously 
had 2.5 STABLE10 running, that's why I reverted to it temporarily. Further, 
I still have squid 3.0/14 in place, as I installed 2.5 in a separate 
directory, and I can run squid 3.0/14 anytime. I would also welcome it if 
you could tell me the most stable version of squid. 

2. Secondly, we are using RAID 5 and have a very powerful machine at present 
compared to the previous one, and the previous setup was working well with 
the same amount of traffic and a less powerful system.

3. Thirdly, I have a gigabit network card, but yes, I have a 100 Mb ethernet 
channel; as described in step 2, the same link was working superbly in the 
previous setup.

4. I could not follow Chris Robertson's question regarding processors; I 
have two dual-core Xeon processors (3.2 GHz), and I captured the stats at 
peak hours when performance was degraded.


So what should I do???

Regards,

--- On Wed, 7/1/09, Chris Robertson crobert...@gci.net wrote:

 From: Chris Robertson crobert...@gci.net
 Subject: Re: [squid-users] squid becomes very slow during peak hours
 To: squid-users@squid-cache.org
 Date: Wednesday, July 1, 2009, 2:25 AM
 goody goody wrote:
  Hi there,
 
  I am running squid 2.5 on freebsd 7,
 
 As Adrian said, upgrade.  2.6 (and 2.7) support kqueue
 under FreeBSD.
 
    and my squid box responds very slowly during peak
  hours. my squid machine has twin dual-core processors, 4 GB of
  RAM, and the following hdds.
 
   Filesystem     Size    Used   Avail  Capacity  Mounted on
   /dev/da0s1a    9.7G    241M    8.7G      3%    /
   devfs          1.0K    1.0K      0B    100%    /dev
   /dev/da0s1f     73G     35G     32G     52%    /cache1
   /dev/da0s1g     73G    2.0G     65G      3%    /cache2
   /dev/da0s1e     39G    2.5G     33G      7%    /usr
   /dev/da0s1d     58G    6.4G     47G     12%    /var
 
 
   below are the stats and settings I have. I need
  further guidance to improve the box.
  
   last pid: 50046;  load averages: 1.02, 1.07, 1.02   up 7+20:35:29  15:21:42
   26 processes:  2 running, 24 sleeping
   CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
   Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
   Swap: 4096M Total, 20K Used, 4096M Free
 
     PID USERNAME THR PRI NICE   SIZE    RES STATE  C   TIME    WCPU COMMAND
   49819 sbt        1 105    0   360M   351M CPU3   3  92:43  98.14% squid
     487 root       1  96    0  4372K  2052K select 0  57:00   3.47% natd
     646 root       1  96    0 16032K 12192K select 3  54:28   0.00% snmpd

 SNIP
  pxy# iostat
        tty            da0             pass0            cpu
   tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
     0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95
  
  pxy# vmstat
   procs    memory        page                   disks    faults       cpu
   r b w    avm    fre   flt  re  pi  po  fr  sr da0 pa0   in   sy   cs us sy id
   1 3 0 458044 103268    12   0   0   0  30   5   0   0  273 1721 2553  4  1 95

 
  Those statistics show wildly different utilization.  The first (top, I
  assume) shows 75% idle (or a whole CPU in use).  The next two show 95%
  idle (in effect, one CPU 20% used).  How close (in time) were the
  statistics gathered?
 
 
  some lines from squid.conf
  cache_mem 256 MB
  cache_replacement_policy heap LFUDA
  memory_replacement_policy heap GDSF
 
  cache_swap_low 80
  cache_swap_high 90
 
  cache_dir diskd /cache2 6 16 256 Q1=72 Q2=64
  cache_dir diskd /cache1 6 16 256 Q1=72 Q2=64
 
  cache_log /var/log/squid25/cache.log
  cache_access_log /var/log/squid25/access.log
  cache_store_log none
 
  half_closed_clients off
  maximum_object_size 1024 KB 

   if any other info is required, I shall provide it.

 
 The types (and number) of ACLs in use would be of interest
 as well.
 
  Regards,
  .Goody.

 
 Chris
 
 


  



[squid-users] 2.7.Stable6 httpReadReply: Excess data

2009-06-16 Thread Quin Guin

Hi,

 I need some assistance looking into a high number of "httpReadReply: Excess 
data" entries: around 3000 cache.log entries per day per SQUID server. The 
"httpReadReply: Excess data from GET http://xx" messages are happening for 
many sites; from my reading, in most cases it is an issue with the content 
site/server. I am a bit concerned: is this a sign of a memory or disk issue? 
From the cache manager, things look to be running well. I also see some 
other errors, which I have included below with more information on my setup. 
The "urlParse: Illegal character in hostname 'www.google-%20analytics.com'" 
message is just annoying, and if anyone has a way to fix it besides blocking 
it, I would appreciate any ideas on that.
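
One way to chase the malformed hostname is to find which requests carry it 
and work back to the referring pages; a sketch, assuming a default 
access.log location:

# the %20 is a literal space embedded in the page's analytics URL;
# the matching log lines show which clients/sites are requesting it
grep -F 'google-%20analytics.com' /usr/local/squid/var/logs/access.log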

 I am starting to see latency, and I have a 3-node cluster of SQUID servers 
set up as standard reverse proxies. 

Cache.log entries:

2009/06/16 21:21:43| httpReadReply: Excess data from GET 
http://www.myyearbook.com/apps/home
2009/06/16 21:22:03| clientTryParseRequest: FD 14 (10.22.0.64:40881) Invalid 
Request
2009/06/16 21:22:04| clientTryParseRequest: FD 49 (10.22.0.63:40894) Invalid 
Request
2009/06/16 21:22:06| clientTryParseRequest: FD 36 (10.22.0.63:40938) Invalid 
Request
2009/06/16 21:22:21| clientTryParseRequest: FD 290 (10.22.0.65:41114) Invalid 
Request
2009/06/16 21:22:21| clientTryParseRequest: FD 415 (10.22.0.65:41124) Invalid 
Request
2009/06/16 21:22:22| clientTryParseRequest: FD 361 (10.22.0.63:41168) Invalid 
Request
2009/06/16 21:22:35| clientTryParseRequest: FD 109 (10.22.0.64:41418) Invalid 
Request
2009/06/16 21:22:35| clientTryParseRequest: FD 129 (10.22.0.63:41431) Invalid 
Request
2009/06/16 21:22:36| clientTryParseRequest: FD 477 (10.22.0.65:41458) Invalid 
Request
2009/06/16 21:22:50| clientTryParseRequest: FD 356 (10.22.0.63:41707) Invalid 
Request
2009/06/16 21:22:51| clientTryParseRequest: FD 180 (10.22.0.64:41719) Invalid 
Request
2009/06/16 21:22:51| clientTryParseRequest: FD 197 (10.22.0.63:41744) Invalid 
Request
2009/06/16 21:23:01| clientTryParseRequest: FD 49 (10.22.0.64:41875) Invalid 
Request
2009/06/16 21:23:01| clientTryParseRequest: FD 104 (10.22.0.63:41887) Invalid 
Request
2009/06/16 21:23:02| clientTryParseRequest: FD 399 (10.22.0.64:41921) Invalid 
Request
2009/06/16 21:23:03| httpReadReply: Excess data from GET 
http://www.myyearbook.com/apps/home
2009/06/16 21:23:04| httpReadReply: Excess data from GET 
http://www.myyearbook.com/apps/home
2009/06/16 21:23:21| clientTryParseRequest: FD 117 (10.22.0.63:42346) Invalid 
Request
2009/06/16 21:23:21| clientTryParseRequest: FD 457 (10.22.0.65:42394) Invalid 
Request
2009/06/16 21:23:22| clientTryParseRequest: FD 328 (10.22.0.63:42458) Invalid 
Request
2009/06/16 21:23:23| urlParse: Illegal character in hostname 
'www.google-%20analytics.com'
2009/06/16 21:23:25| httpReadReply: Excess data from GET 
http://sugg.search.yahoo.net/sg/?output=fxjsonp&nresults=10&command=horny%20granies
2009/06/16 21:23:45| clientTryParseRequest: FD 544 (10.22.0.65:42839) Invalid 
Request
2009/06/16 21:23:46| clientTryParseRequest: FD 228 (10.22.0.64:42852) Invalid 
Request
2009/06/16 21:23:47| clientTryParseRequest: FD 54 (10.22.0.64:42874) Invalid 
Request
2009/06/16 21:23:49| urlParse: Illegal character in hostname 
'www.google-%20analytics.com'
2009/06/16 21:24:03| clientTryParseRequest: FD 35 (10.22.0.63:43094) Invalid 
Request

Squid Cache: Version 2.7.STABLE6-20090511
configure options:  '--prefix=/usr/local/squid-2.7.STABLE6-20090511' 
'--enable-epoll' '--with-pthreads' '--enable-snmp' 
'--enable-storeio=ufs,aufs,coss' '-with-large-files' 
'--enable-large-cache-files' '--enable-follow-x-forwarded-for' 
'--with-maxfd=16384' '--disable-dependency-tracking' '--disable-ident-lookups' 
'--enable-removal-policies=heap,lru' '--disable-wccp' 'CFLAGS=-fPIE -Os -g 
-pipe -fsigned-char -O2 -g -pipe -m64' 'LDFLAGS=-pie'


Connection information for squid:
        Number of clients accessing cache:              9
        Number of HTTP requests received:               431867579
        Number of ICP messages received:                0
        Number of ICP messages sent:                    0
        Number of queued ICP replies:                   0
        Request failure ratio:                          0.00
        Average HTTP requests per minute since start:   14133.0
        Average ICP messages per minute since start:    0.0
        Select loop called: 1303826448 times, 1.406 ms avg
Cache information for squid:
        Request Hit Ratios:             5min: 59.2%, 60min: 60.8%
        Byte Hit Ratios:                5min: 65.7%, 60min: 65.5%
        Request Memory Hit Ratios:      5min: 27.0%, 60min: 26.9%
        Request Disk Hit Ratios:        5min: 61.1%, 60min: 61.4%
        Storage Swap size:              207997068 KB
        Storage Mem size:               262592 KB
        Mean Object Size:               19.94 KB
        Requests given to unlinkd:      0
Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   0.01745  0.01469
        Cache Misses:          0.10281  0.10857
        Cache Hits:            0.0      0.0
        Near Hits:             0.07825  0.08729
        Not-Modified 

[squid-users] TCP_MISS/200 with squid-2.7.STABLE6 Reverse proxy config

2009-04-16 Thread Quin Guin

Hi,

 I have been using squid for many years as a forward proxy, and now I need 
to set up a reverse proxy. I have read and studied many different email 
threads and FAQs on this topic, but I can't seem to get past TCP_MISS/200s. 
Please see my most basic config below; I know there is a lot more that can 
be done to make it more secure, but I am just trying to turn a TCP_MISS/200 
into a TCP_HIT!!! 

I am open to trying things, and I tried installing 3.1 on RHEL4-U6 64-bit, 
but it keeps giving this error: configure: error: pthread library required 
but cannot be found. I will work on that later.

http_port 81 accel defaultsite=f99.net 
cache_peer 10.20.20.39 parent 88 0 no-query originserver login=PASS 
name=dtvAccel 
##ACL# 
acl ALL dstdomain f99.net 
http_access allow ALL 
cache_peer_access dtvAccel allow ALL 
cache_peer_access dtvAccel deny all 
##Headers## 
via on 
header_access Via allow all 
header_access Age deny all 
header_access X-Cache deny all 
##Cache Config## 
collapsed_forwarding on 
minimum_expiry_time 120 seconds 
cache_mem 256 MB 
maximum_object_size 40960 KB 
maximum_object_size_in_memory 50 KB 
ipcache_size 40960 
# dc setting changed - orig first - new second 
# cache_dir aufs /usr/local/squid-2.7/var/cache 5 16 256 
cache_dir ufs /usr/local/squid/var/cache 5000 16 256 
access_log /usr/local/squid/var/logs/access.log squid 
cache_store_log /usr/local/squid/var/logs/squid-store.log 
#refresh_pattern ^ftp:   144020% 10080 
#refresh_pattern ^gopher:14400%  1440 
#refresh_pattern (/cgi-bin/|\?)  0   20% 720 
refresh_pattern -i \.jpg$ 10 90% 10 override-expire override-lastmod 
ignore-reload reload-into-ims 
refresh_pattern -i \.jpeg$ 10 90% 10 override-expire override-lastmod 
ignore-reload reload-into-ims 
refresh_pattern -i \.gif$ 10 90% 10 override-expire override-lastmod 
ignore-reload reload-into-ims 
refresh_pattern -i \.png$ 10 90% 10 override-expire override-lastmod 
ignore-reload reload-into-ims 
refresh_pattern -i \.swf$ 10 90% 10 override-expire override-lastmod 
ignore-reload reload-into-ims 
refresh_pattern -i \.flv$ 10 90% 10 override-expire override-lastmod 
ignore-reload reload-into-ims 
refresh_pattern -i \.js$ 2 90% 2 override-expire override-lastmod ignore-reload 
reload-into-ims 
refresh_pattern -i \.css$ 2 90% 2 override-expire override-lastmod 
ignore-reload reload-into-ims 
refresh_pattern -i \.htm$10   90% 10 
refresh_pattern -i \.html$   10   90% 10 
#icp_access allow all 
cache_mgr quing...@yahoo.com 
visible_hostname diuqs 
logfile_rotate 12 
coredump_dir /usr/local/squid/var/cache
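
A quick way to see why everything stays a MISS is to inspect the reply 
headers coming straight from the origin; a sketch, assuming the origin at 
10.20.20.39:88 from the cache_peer line (the test URL is hypothetical):

# fetch one object directly from the origin and dump its headers
curl -s -D - -o /dev/null -H 'Host: f99.net' http://10.20.20.39:88/images/logo.jpg
# headers such as Cache-Control: no-cache or private, Vary: *, Set-Cookie,
# or an Expires date in the past commonly keep responses from caching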


Thank you very much,

Quin



  


Re: [squid-users] Squid Scalability

2009-04-06 Thread Quin Guin

Hi,

Here are the results for 2 of our squid servers with the highest use. One is 
2.6 and the other is 2.7; they all use AUFS with JBOD ext2, rw,noatime. I 
will upgrade the 2.6 to 2.7 this week so we can see the change.
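
For context, JBOD here means each cache disk carries its own plain 
filesystem; a sketch of the matching fstab entries (device names and mount 
points are illustrative):

# one ext2 filesystem per cache disk, mounted rw,noatime - no RAID layer
/dev/sdb1  /cache1  ext2  rw,noatime  0 0
/dev/sdc1  /cache2  ext2  rw,noatime  0 0
/dev/sdd1  /cache3  ext2  rw,noatime  0 0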

Version 2.7.STABLE6   
Quad-Core
CPU Intel(R) Xeon(R) CPU L5420 @ 2.50GHz
RAM 8 GB
HDD 3x SAS,Fujitsu,147Gb,15K
OS RHEL4 AS U7 64bit – 2.6.9-78.0.13.ELsmp
Users 57
RPS 166.95
Request Hit Ratio 51.7%, 51.3%
CPU Usage:7.18%
CPU Usage, 5 minute avg:4.33%
CPU Usage, 60 minute avg:3.97%



Version 2.6.STABLE21   
Quad-Core
CPU Intel(R) Xeon(R) CPU L5420  @ 2.50GHz
RAM 8 GB
HDD 3x SATA,147Gb,7200K
OS RHEL4 AS U6 64bit – 2.6.9-67.ELsmp
Users 15
RPS 262.3
Request Hit Ratio 74.2%,  73.7%
CPU Usage:7.90%
CPU Usage, 5 minute avg:10.45%
CPU Usage, 60 minute avg:10.21%


Quin

--- On Mon, 4/6/09, Amos Jeffries squ...@treenet.co.nz wrote:

 From: Amos Jeffries squ...@treenet.co.nz
 Subject: Re: [squid-users] Squid Scalability
 To: Gavin McCullagh gavin.mccull...@gcd.ie
 Cc: squid-users@squid-cache.org
 Date: Monday, April 6, 2009, 9:04 AM
 Gavin McCullagh wrote:
  Hi,
  
  On Sat, 04 Apr 2009, Amos Jeffries wrote:
  
  For now what we need are the hit/miss ratios and user numbers from Squid
  under peak load, and a few other details to guide comparisons.
  
    http://wiki.squid-cache.org/KnowledgeBase/Benchmarks
  details what we are looking for right now and where to locate it.
  
  Here's our current situation:
  
 
 
  Version: 2.6.STABLE18 (Ubuntu Hardy Package)
  OS: 32-Bit Ubuntu GNU/Linux (Hardy)
  CPU: Dual Core Intel(R) Xeon(R) CPU  3050  @
 2.13GHz
  RAM: 8GB
  HDD: 2x SATA disks (150GB, 1TB)
  Cache: 1x 600GB
  Users: ~3000
  RPS: 130
  Hit Ratio: 35-40%
  Byte Hit Ratio: ~13%
  
  Submitted by: Gavin McCullagh, Griffith College
 Dublin
  With this hit ratio and cache size, substantial cpu
 time is spent in iowait
  as the disk is overloaded.  Reducing the cache to
 450GB relieves this, but
  the hit rate drops to more like 10-11%.
 
 
  
  I'm going to put a second 1TB disk in to replace the
 130GB and have a
  second large cache_dir so this should improve.
  
  Gavin
  
 
 Thank you. Added.
 What sort of CPU load does it run under?
 And being linux is it running AUFS cache_dir?
 
 Amos
 -- 
 Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
   Current Beta Squid 3.1.0.6
 






Re: [squid-users] Squid Scalability

2009-04-03 Thread Quin Guin

Hi Amos,

  I am willing to supply benchmarking data for 6 different deployments 
configured as forward proxies on a regular basis. Where should I submit the 
records? Currently we are using 2.7 and 2.6, and I should be able to get 
some 3.x data as well. 


Thanks,

Quin

--- On Fri, 4/3/09, Amos Jeffries squ...@treenet.co.nz wrote:

 From: Amos Jeffries squ...@treenet.co.nz
 Subject: Re: [squid-users] Squid Scalability
 To: Sunny Bhatheja opensource.linu...@gmail.com
 Cc: squid-users@squid-cache.org
 Date: Friday, April 3, 2009, 10:06 AM
 Sunny Bhatheja wrote:
  Hi,
       I have the following configuration of my Hardware. Can anyone
 suggest how much I can scale my Squid in terms of users?
  
  1)       Sun Fire system x4450
  
  2)       Quad Core
  
  3)       64 GB RAM
  
  4)       146x4 GB HDD
  
  I am using squid 2.6 STABLE4 that is bundled with RHEL 5.2
 
 Despite many years of asking, few people have ever supplied
 the squid project with relevant benchmarking info. We depend
 on volunteers so there are no hard numbers available
 publicly yet.
 
 req/sec scales into thousands on modern hardware. It
 depends on what modes you run squid as (forward/reverse have
 vastly different maximums), how high the hit-ratios are and
 how many req/sec each user makes.
 
 To scale higher you will need a newer Squid than
 2.6.stable6.
 
 Amos
 -- 
 Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
   Current Beta Squid 3.1.0.6
 






Re: [squid-users] Squid Scalability

2009-04-03 Thread Quin Guin

Should I just put the records on the wiki at 
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks?highlight=(benchmarking) 
or is there a better place for this information?

I can provide the data today once I know where the best place to post it is.
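
For anyone gathering the same numbers, the counters come from the cache 
manager interface; a sketch, assuming squidclient can reach the proxy on its 
listening port:

# pull the hit ratios and request rates the benchmarks page asks for
squidclient -h localhost -p 8080 mgr:info | \
  egrep 'Hit Ratios|requests per minute|clients accessing'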



--- On Fri, 4/3/09, Amos Jeffries squ...@treenet.co.nz wrote:

 From: Amos Jeffries squ...@treenet.co.nz
 Subject: Re: [squid-users] Squid Scalability
 To: Quin Guin quing...@yahoo.com
 Cc: squid-users@squid-cache.org
 Date: Friday, April 3, 2009, 12:42 PM
 Quin Guin wrote:
  Hi Amos,
  
    I am willing to supply benchmarking
 data for 6 different deployments configured as forward
 proxies on a regular basis. Where should I submit the
 records and currently we are using 2.7, 2.6 and I should be
 able to get some 3.x data as well? 
  
  
  Thanks,
  
  Quin
 
 Excellent thank you.
 
 Email the info to squid-...@squdi-cache.org
 mailing list please.
 
 Amos
 
  
  --- On Fri, 4/3/09, Amos Jeffries squ...@treenet.co.nz
 wrote:
  
  From: Amos Jeffries squ...@treenet.co.nz
  Subject: Re: [squid-users] Squid Scalability
  To: Sunny Bhatheja opensource.linu...@gmail.com
  Cc: squid-users@squid-cache.org
  Date: Friday, April 3, 2009, 10:06 AM
  Sunny Bhatheja wrote:
  Hi,
        I have the following configuration of my Hardware. Can anyone
  suggest how much I can scale my Squid in terms of users?
  1)       Sun Fire system x4450
  2)       Quad Core
  3)       64 GB RAM
  4)       146x4 GB HDD
 
  I am using squid 2.6 STABLE4 that is bundled with RHEL 5.2
 
  Despite many years of asking, few people have ever supplied
  the squid project with relevant benchmarking info. We depend
  on volunteers so there are no hard numbers available publicly yet.
 
  req/sec scales into thousands on modern hardware. It depends on what
  modes you run squid as (forward/reverse have vastly different maximums),
  how high the hit-ratios are and how many req/sec each user makes.
 
  To scale higher you will need a newer Squid than 2.6.stable6.
 
 
 
 -- 
 Please be using
    Current Stable Squid 2.7.STABLE6 or
 3.0.STABLE13
    Current Beta Squid 3.1.0.6
 





Re: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0

2009-02-10 Thread Quin Guin

Hi Amos,

I just changed the symbolic link to 2.6 and used the cache dir for 2.6.
I still have 2.7 compiled and installed on the servers, and I am running
2 servers with 2.7 and 2.6 at this deployment.

I will install 3.1 on one server and see how stable it is, because
stability is a major concern for me. I will share data if you would like,
because I can put a fair amount of real load on this build.

Thanks

Q


- Original Message 
From: Amos Jeffries squ...@treenet.co.nz
To: Quin Guin quing...@yahoo.com
Cc: squid-users@squid-cache.org
Sent: Monday, February 9, 2009 7:19:30 PM
Subject: Re: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0


 Hi,

   I am new to the squid-users list, and I apologize if I am not posting
 correctly, but I have been using squid for many years and this is my
 first post. I have read through the FAQs/Wiki and the bugzilla database
 to see if this is a known issue or a configuration issue on my part, but
 I am not finding anything relevant to the Median Service Time for DNS
 Lookups always being zero. So I switched back to the 2.6-STABLE22
 build line, and it works as I expected.

Do you mean you went back and re-installed 2.6? Or did you change from using
some non-working build options to the old working configure options, but
still with 2.7?



 Median Service Times (seconds)  5 min    60 min:
 HTTP Requests (All):   0.03241  0.03427
 Cache Misses:          0.12106  0.12106
 Cache Hits:            0.00091  0.00091
 Near Hits:             0.07409  0.07825
 Not-Modified Replies:  0.00091  0.00091
 DNS Lookups:           0.00094  0.00094
 ICP Queries:           0.0      0.0

 I would migrate from 2.6 to the 3.0 build line, but follow_x_forwarded_for
 is required for our installations. I would appreciate any pointers or
 advice on what the issue is, or if I am posting this issue to the wrong
 list.

You may also want to give 3.1 a test instead of 3.0 and see if it meets
your needs. The XFF stuff has been ported there.

Amos



 Thank you in advance,

 Q



 - Original Message 
 From: Quin Guin quing...@yahoo.com
 To: squid-users@squid-cache.org
 Sent: Sunday, February 8, 2009 7:54:05 PM
 Subject: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0


 Hello,

   I am currently in the process of moving from 2.6 to 2.7,
 and I am seeing an issue on 2 of the servers that I just installed
 2.7-STABLE6 on. The dns.median_svc_time = 0.00 seconds is always 0
 no matter what, and squid is processing requests just fine.
 
 I am running a Linux 2.6.9 kernel and did not have this issue on
 2.6-STABLE22, and I am using squid's internal DNS without any issues. I
 just want to make sure that I don't have any issues before rolling out 2.7
 to the rest of my squid servers.

 Here is an example from one of the 2.7-STABLE6 servers:

 Median Service Times (seconds)  5 min    60 min:
 HTTP Requests (All):   0.03066  0.03241
 Cache Misses:          0.10857  0.10857
 Cache Hits:            0.0      0.0
 Near Hits:             0.06286  0.06286
 Not-Modified Replies:  0.0      0.0
 DNS Lookups:           0.0      0.0
 ICP Queries:           0.0      0.0


 Regards,

 Q







  



[squid-users] squid-3.1.0.5-20090210 errors on make check!

2009-02-10 Thread Quin Guin

Hi, 

 I am trying to compile squid-3.1.0.5-20090210 on Linux cache2 
2.6.9-78.0.8.ELsmp #1 SMP Wed Nov 5 07:14:58 EST 2008 x86_64 x86_64 x86_64 
GNU/Linux, which is Red Hat Enterprise Linux AS release 4 (Nahant Update 6). 
The server has dual quad-core Xeons (yes, I know it's overkill) with 8G RAM.

When I do a # make check, I am getting the errors below. I did a search on 
these messages and could not find any info on a solution. I can provide 
additional information if needed.

tests/testArray.h:13: error: `CPPUNIT_NS' has not been declared
tests/testArray.h:14: error: expected class-name before '{' token
tests/testArray.h:15: error: ISO C++ forbids declaration of 
`CPPUNIT_TEST_SUITE' with no type
tests/testArray.h:16: error: `all' has not been declared
tests/testArray.h:16: error: ISO C++ forbids declaration of `CPPUNIT_TEST' with 
no type
tests/testArray.h:16: error: ISO C++ forbids declaration of `parameter' with no 
type
tests/testArray.h:17: error: ISO C++ forbids declaration of 
`CPPUNIT_TEST_SUITE_END' with no type
tests/testArray.cc:10: error: expected constructor, destructor, or type 
conversion before ';' token
tests/testArray.cc: In member function `void testArray::all()':
tests/testArray.cc:14: error: `CPPUNIT_ASSERT' was not declared in this scope
make[3]: *** [testArray.o] Error 1
make[3]: Leaving directory `/usr/local/src/dist/squid-3.1.0.5-20090210/lib'
make[2]: *** [check-am] Error 2
make[2]: Leaving directory `/usr/local/src/dist/squid-3.1.0.5-20090210/lib'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/dist/squid-3.1.0.5-20090210/lib'
make: *** [check-recursive] Error 1
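
For what it is worth, the CPPUNIT_NS / CPPUNIT_TEST_SUITE errors usually 
mean the CppUnit headers were not found when configure ran, so the test 
macros never expand. A sketch of the usual fix (package names are an 
assumption; RHEL4 may need a third-party repository for cppunit):

# install the CppUnit development headers, then reconfigure and retest
yum install cppunit cppunit-devel
./configure [same options as before] && make check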


Thanks,

Q



  



Re: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0

2009-02-10 Thread Quin Guin

Here are the results of "squid -v":

cache1:
Squid Cache: Version 2.7.STABLE6-20090207
configure options:  '--prefix=/usr/local/squid-2.7.STABLE6' '--enable-epoll' 
'--enable-snmp' '--enable-storeio=ufs,aufs,coss' '-with-large-files' 
'--enable-large-cache-files' '--enable-follow-x-forwarded-for' 
'--with-maxfd=16384' '--enable-removal-policies=heap,lru'

cache2:
Squid Cache: Version 2.7.STABLE5-20081230
configure options:  '--prefix=/usr/local/squid-2.7.STABLE5' '--enable-epoll' 
'--enable-snmp' '--enable-storeio=ufs,aufs,coss' '-with-large-files' 
'--enable-large-cache-files' '--enable-follow-x-forwarded-for' 
'--with-maxfd=16384' '--enable-removal-policies=heap,lru'

Quin

 

- Original Message 
From: Amos Jeffries squ...@treenet.co.nz
To: Quin Guin quing...@yahoo.com
Cc: squid-users@squid-cache.org
Sent: Tuesday, February 10, 2009 2:43:42 PM
Subject: Re: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0

Quin Guin wrote:
 Hi Amos,
 
 I just changed the symbolic link to 2.6 and used the cache dir for 2.6.
 I still have 2.7 compiled and installed on the servers, and I am running
 2 servers with 2.7 and 2.6 at this deployment.
 
 I will install 3.1 on one server and see how stable it is, because
 stability is a major concern for me. I will share data if you would like,
 because I can put a fair amount of real load on this build.
 

Can you share the output of "squid -v" for each of the working and 
non-working binaries, please?

Amos

 Thanks
 
 Q
 
 
 - Original Message 
 From: Amos Jeffries squ...@treenet.co.nz
 To: Quin Guin quing...@yahoo.com
 Cc: squid-users@squid-cache.org
 Sent: Monday, February 9, 2009 7:19:30 PM
 Subject: Re: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0
 
 Hi,

    I am new to the squid-users list, and I apologize if I am not posting
  correctly, but I have been using squid for many years and this is my
  first post. I have read through the FAQs/Wiki and the bugzilla database
  to see if this is a known issue or a configuration issue on my part, but
  I am not finding anything relevant to the Median Service Time for DNS
  Lookups always being zero. So I switched back to the 2.6-STABLE22
  build line, and it works as I expected.
 
  Do you mean you went back and re-installed 2.6? Or did you change from
  using some non-working build options to the old working configure options,
  but still with 2.7?
 
 
 Median Service Times (seconds)  5 min    60 min:
 HTTP Requests (All):   0.03241  0.03427
 Cache Misses:          0.12106  0.12106
 Cache Hits:            0.00091  0.00091
 Near Hits:             0.07409  0.07825
 Not-Modified Replies:  0.00091  0.00091
 DNS Lookups:           0.00094  0.00094
 ICP Queries:           0.0      0.0

  I would migrate from 2.6 to the 3.0 build line, but follow_x_forwarded_for
  is required for our installations. I would appreciate any pointers or
  advice on what the issue is, or if I am posting this issue to the wrong
  list.
 
 You may also want to give 3.1 a test instead of 3.0 and see if it meets
 your needs. The XFF stuff has been ported there.
 
 Amos
 

 Thank you in advance,

 Q



 - Original Message 
 From: Quin Guin quing...@yahoo.com
 To: squid-users@squid-cache.org
 Sent: Sunday, February 8, 2009 7:54:05 PM
 Subject: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0


 Hello,

   I am currently in the process of moving from 2.6 to 2.7,
 and I am seeing an issue on 2 of the servers that I just installed
 2.7-STABLE6 on. The dns.median_svc_time = 0.00 seconds is always 0
 no matter what, and squid is processing requests just fine.
 
 I am running a Linux 2.6.9 kernel and did not have this issue on
 2.6-STABLE22, and I am using squid's internal DNS without any issues. I
 just want to make sure that I don't have any issues before rolling out 2.7
 to the rest of my squid servers.

 Here is an example from one of the 2.7-STABLE6 servers:

 Median Service Times (seconds)  5 min    60 min:
 HTTP Requests (All):   0.03066  0.03241
 Cache Misses:          0.10857  0.10857
 Cache Hits:            0.0      0.0
 Near Hits:             0.06286  0.06286
 Not-Modified Replies:  0.0      0.0
 DNS Lookups:           0.0      0.0
 ICP Queries:           0.0      0.0


 Regards,

 Q





 
 
  
 


-- 
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
   Current Beta Squid 3.1.0.5



  



Re: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0

2009-02-09 Thread Quin Guin

Hi,

  I am new to the squid-users list, and I apologize if I am not posting 
correctly, but I have been using squid for many years and this is my first 
post. I have read through the FAQs/Wiki and the bugzilla database to see if 
this is a known issue or a configuration issue on my part, but I am not 
finding anything relevant to the Median Service Time for DNS Lookups always 
being zero. So I switched back to the 2.6-STABLE22 build line, and it works 
as I expected.

Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.03241  0.03427
Cache Misses:          0.12106  0.12106
Cache Hits:            0.00091  0.00091
Near Hits:             0.07409  0.07825
Not-Modified Replies:  0.00091  0.00091
DNS Lookups:           0.00094  0.00094
ICP Queries:           0.0      0.0

I would migrate from 2.6 to the 3.0 build line, but follow_x_forwarded_for is
required for our installations. I would appreciate any pointers or
advice on what the issue is, or if I am posting this issue to the wrong
list.


Thank you in advance,

Q



- Original Message 
From: Quin Guin quing...@yahoo.com
To: squid-users@squid-cache.org
Sent: Sunday, February 8, 2009 7:54:05 PM
Subject: [squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0


Hello,

  I am currently in the process of moving from 2.6 to 2.7,
and I am seeing an issue on 2 of the servers that I just installed
2.7-STABLE6 on. The dns.median_svc_time = 0.00 seconds is always 0
no matter what, and squid is processing requests just fine.

I am running a Linux 2.6.9 kernel and did not have this issue on
2.6-STABLE22, and I am using squid's internal DNS without any issues. I just
want to make sure that I don't have any issues before rolling out 2.7 to the
rest of my squid servers.

Here is an example from one of the 2.7-STABLE6 servers:

Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.03066  0.03241
Cache Misses:          0.10857  0.10857
Cache Hits:            0.0      0.0
Near Hits:             0.06286  0.06286
Not-Modified Replies:  0.0      0.0
DNS Lookups:           0.0      0.0
ICP Queries:           0.0      0.0


Regards,

Q


  



[squid-users] Squid-2.7-STABLE6 dns.median_svc_time is always 0

2009-02-08 Thread Quin Guin

Hello,

  I am currently in the process of moving from 2.6 to 2.7,
and I am seeing an issue on 2 of the servers that I just installed
2.7-STABLE6 on. The dns.median_svc_time = 0.00 seconds is always 0
no matter what, and squid is processing requests just fine.

I am running a Linux 2.6.9 kernel and did not have this issue on
2.6-STABLE22, and I am using squid's internal DNS without any issues. I just
want to make sure that I don't have any issues before rolling out 2.7 to the
rest of my squid servers.

Here is an example from one of the 2.7-STABLE6 servers:

Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.03066  0.03241
Cache Misses:          0.10857  0.10857
Cache Hits:            0.0      0.0
Near Hits:             0.06286  0.06286
Not-Modified Replies:  0.0      0.0
DNS Lookups:           0.0      0.0
ICP Queries:           0.0      0.0
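
For comparison, the counter named in the subject can also be read directly 
from the cache manager; a sketch, assuming squidclient pointed at the 
proxy's listening port:

# mgr:5min / mgr:60min report the interval counters, including
# dns.median_svc_time, in machine-readable form
squidclient -h localhost -p 3128 mgr:5min | grep dns.median_svc_time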


Regards,

Q