[squid-users] R: [squid-users] cache size and structure

2009-06-30 Thread Riccardo Castellani
>> excuse me, 1 megabit per second for a week, so:
>> 1 Mb * 60*60*24*7 = 604,800 / 8 = 75,600 MB, about 76 GB of traffic per week.
>> I'm referring to both inbound and outbound traffic (1 Mbps in, 1 Mbps out).

>Well, that is just one megabit per second. That is the link bandwidth; it's
>irrelevant to quote it over different intervals, you are just confusing us (or
>at least me). We can work out how much that makes for a day, week, month,
>year
>...

Matus Uhlar,
my internet link bandwidth is about 10 Mbps and I'm monitoring (by mrtg) the
traffic usage of my Squid parent (A), which is about 1 Mbps for a week (so
76 GB data traffic for a week). I also have 2 squid children (B, C) which
communicate with squid A. Some user groups use Squid A, others use Squid B and
others use Squid C.
1 Mbps 


>so, some users access directly your squid?
yes


-----Original Message-----
From: Matus UHLAR - fantomas [mailto:uh...@fantomas.sk]
Sent: Tuesday, June 30, 2009 4:48 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] cache size and structure

>>> My cache traffic volume (I/O) is about 2 Mbps a week with peaks of
>>> 3 Mbps.
>>
>> ehm, 2 megabits per second "a week"?

On 25.06.09 23:01, Riccardo Castellani wrote:
> excuse me, 1 megabit per second for a week, so:
> 1 Mb * 60*60*24*7 = 604,800 / 8 = 75,600 MB, about 76 GB of traffic per week.
> I'm referring to both inbound and outbound traffic (1 Mbps in, 1 Mbps out).

Well, that is just one megabit per second. That is the link bandwidth; it's
irrelevant to quote it over different intervals, you are just confusing us (or
at least me). We can work out how much that makes for a day, week, month, year
...
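The arithmetic being quoted back and forth can be checked in a few lines (decimal units assumed, i.e. 1 Mbit = 10^6 bits, matching the 604,800 and 75,600 figures above):

```python
# Volume transferred by a link saturated at 1 Mbit/s for one week.
MBIT = 10**6
SECONDS_PER_WEEK = 60 * 60 * 24 * 7      # 604800 seconds

bits = 1 * MBIT * SECONDS_PER_WEEK       # total bits in a week
megabytes = bits / 8 / 10**6             # bytes -> decimal megabytes
gigabytes = megabytes / 1000             # "about 76 GB"

print(f"{megabytes:.0f} MB ~= {gigabytes:.1f} GB per week")
```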

>> If users access your squid directly (not only through child proxies), there
>> may be the need for cache_mem.
>
> some users access by child squids

so, some users access directly your squid?

>> I have "48 256" for 20GiB cache. If you take the number of files in the
>> cache directory, divide by 256 (l2 dirs) and 256 (max files in l2 dir), you
>> should get the approximate need of l1 dirs. The average object size is
>> around 13KiB, which means you should have one L1 cache_dir per ~800MiB of
>> cache size. Splitting small files to COSS (using the min-size option for
>> *ufs and max-size for COSS) will make that an even smaller number, since
>> only big files will be placed in the *ufs directory hierarchy.
>
>
> I'm sorry but I didn't understand.
> How can I enable COSS ?

by using a configure option when building squid. It's only stable in squid-2.7
though. If you have 2.7 already built, check whether it's already enabled,
and then you can reserve some disk space for a COSS cache_dir.

look at http://devel.squid-cache.org/coss/index.html
and http://devel.squid-cache.org/coss/coss-notes.txt
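A sketch of what such a small/large split might look like in a squid-2.7 squid.conf. This is an illustration only: the paths and sizes are invented; only the min-size/max-size pairing comes from the advice above:

```
# Hypothetical example: objects up to 64 KB go to COSS; aufs only
# accepts objects above that (min-size), keeping its directory tree small.
cache_dir coss /cache/coss 2000 max-size=65536 block-size=512
cache_dir aufs /cache/aufs 18000 16 256 min-size=65537
```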
 


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Christian Science Programming: "Let God Debug It!".



Re: [squid-users] R: [squid-users] cache size and structure

2009-06-30 Thread Matus UHLAR - fantomas
> >> excuse me, 1 megabit per second for a week, so:
> >> 1 Mb * 60*60*24*7 = 604,800 / 8 = 75,600 MB, about 76 GB of traffic per week.
> >> I'm referring to both inbound and outbound traffic (1 Mbps in, 1 Mbps out).
> 
> >Well, that is just one megabit per second. That is the link bandwidth; it's
> >irrelevant to quote it over different intervals, you are just confusing us
> >(or at least me). We can work out how much that makes for a day, week,
> >month, year
> >...

On 30.06.09 09:51, Riccardo Castellani wrote:
> Matus Uhlar,

Skip private replies please! I am not subscribed to the list to receive
private squid-related mail.

> my internet link bandwidth is about 10 Mbps and I'm monitoring (by mrtg)
> traffic usage of my Squid parent (A), which is about 1 Mbps for a week (so
> 76 GB data traffic for a week).

No, it is not. It may be 1 Mbps .OR. 76 GB per week; it can't be anything
"per second for a week". You may mean 1 Mbps during weekdays, 1 Mbps during
the whole week, or 1 Mbps on average.

> I also have 2 squid children (B, C) which
> communicate with squid A. Some user groups use Squid A, others use Squid B and
> others use Squid C.
> 1 Mbps 
> 
> 
> >so, some users access directly your squid?
> yes

ok, are your child caches configured as neighbours to each other?

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Windows 2000: 640 MB ought to be enough for anybody


[squid-users] Cache Manager (redirector) : What is Average service time ?

2009-06-30 Thread hims92

Hi, 
Can anyone please clarify what 'average service time = 3 msec' signifies?
I see it when I access the cache manager's redirector information.

Redirector Statistics:
 program: /home/zdn/bin/redirect_parallel.pl
 number running: 2 of 2
 requests sent: 155697
 replies received: 155692
 queue length: 0
 avg service time: 0 msec

Also, what is the unit of the Time column shown in the tabular statistics,
and what does it signify?

 FD  PID  # Requests  Flags  Time  Offset  Request

Thanks
-- 



Re: [squid-users] Offline mode with a parent cache doesn't work

2009-06-30 Thread londoner1

On Fri, 26 Jun 2009 03:42:17 +0100 Amos Jeffries wrote:
>london...@hushmail.com wrote:
>> On Thu, 25 Jun 2009 05:44:24 -0700 Amos Jeffries wrote:
>>> london...@hushmail.com wrote:
>>>> Hi,
>>>>
>>>> I am trying to set up a local squid cache on my laptop so that I can
>>>> browse intranet pages when I am on a plane and don't have a network
>>>> connection.
>>>>
>>>> I am therefore using the squid offline_toggle option via the
>>>> 'squid3client mgr:offline_toggle' command.
>>>>
>>>> However, I am finding that pages are not being cached when I have
>>>> my office proxy set as a parent to my local one.
>>>>
>>>> I am running ubuntu hardy with the squid 3.0.STABLE1 ubuntu package
>>>> on my local machine, with a parent proxy set using the following
>>>> lines in my squid.conf:
>>>>
>>>> cache_peer proxy.nectech.co.uk parent 8080 0 no-query
>>>> prefer_direct off
>>>>
>>>> PS: The above two lines came from the Squid wiki here:
>>>> http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-c050a0a0382c01fbfb9da7e9c18d58bafd4eb027
>>>>
>>>> My approach to test this is as follows:
>>>>
>>>> 1 Set firefox to my localhost:3128 proxy
>>>> 2 Start squid in online mode and browse to a website.
>>>> 3 Close firefox
>>>> 4 Set local squid into offline mode using 'squid3client mgr:offline_toggle'
>>>> 5 Pull out the network cable (to simulate being on a plane)
>>>> 6 Reopen firefox and browse to the same site as in step 2.
>>>>
>>>> After step six, the page does not load and eventually times out.
>>>>
>>>> If I take out the parent cache lines, the offline mode will work.
>>>> Please can anyone help?
>>> 'offline_mode' means that Squid no longer attempts page validation and
>>> fetching of new pages. It does not mean supplying content known to be
>>> past expiry.
>>
>> Hi Amos,
>>
>> Ok, please forgive me if I don't understand you correctly: you are
>> saying that the local squid cache is NOT returning me the page that
>> I previously went to a few minutes ago because it thinks it has
>> expired?
>
>Yes. From the info provided I believe that is what is happening.
>
>>
>> why would it be expired that quickly? and yet not be expired if my
>> local squid cache does not use the parent cache?
>
>HTTP contains things called If-Modified-Since requests. When squid
>thinks something is expired it either uses them or a new request to
>check for new content. Sometimes an IMS request sends back "no, it
>hasn't changed", so Squid updates its information and sends you the
>object again.
>
>It's probable that the HTML part of the page has a short expiry time
>but does not actually change (i.e. a dynamically created page).
>
>Even if all the images etc. on the page are available for months,
>without an HTML part to say how they are displayed the page won't
>show up.
>

Thanks Amos, I understand now.

>
>> Also, a poster on the following blog mentioned that squid offline
>> mode ignores expiry data for cached items when in offline mode. Is
>> this not the case in the latest squid 3 versions?
>>
>> http://people.w3.org/~dom/archives/2006/09/offline-web-cache-with-squid/
>>
>> ps I appreciate your help on the subject.
>
>Mark, who replied there, and others have made a lot of improvements
>to the expiry and IMS sections of the latter Squid 2.6 releases and
>2.7.
>
>The Squid-3 code in the same areas is a lot older, with not all of
>their changes ported over. So I believe it would behave differently,
>though I'm not fully clued up on the finer grains of what's going on
>there.

Ok, so I will try the latest stable 2.6/2.7 builds of squid and
see what the behaviour is in those builds.

Thanks for your help.

londoner1

>
>Amos
>-- 
>Please be using
>   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
>   Current Beta Squid 3.1.0.8

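For reference, the squid.conf pieces under discussion in this thread can be combined as follows. The cache_peer and prefer_direct lines come from the original post; offline_mode on is a hedged alternative to toggling at runtime with 'squid3client mgr:offline_toggle':

```
# Parent proxy (from the original post):
cache_peer proxy.nectech.co.uk parent 8080 0 no-query
prefer_direct off

# Start in offline mode instead of toggling via the cache manager:
offline_mode on
```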



Re: [squid-users] Compress/Zipped Web pages?

2009-06-30 Thread Kinkie
On Tue, Jun 30, 2009 at 7:42 AM, Wilson Hernandez - MSD, S.A. wrote:
> Hello.
>
> I heard a friend of mine talking about how he compresses the requested web
> pages and serves them to users (compressed) with MS ISA Server. Can that be
> done with Squid?

Yes, it can be done with squid 3.1.0 and the GZIP eCAP module.
It's all a bit beta though.

-- 
/kinkie


[squid-users] Squid with TPROXY slow response to requests

2009-06-30 Thread trasor
I am running squid with tproxy, having followed the wiki Features/TPROXY
in bridged mode.  Squid is working properly with no evident errors in
the logs; however, initial requests are extremely slow to respond.  I.e.
the first request for yahoo.com may take several seconds to retrieve
the information and return a page to the requester, while the second request
is nearly instantaneous.  And as long as the page is requested over and
over, it remains instantaneous.  If the page is not requested
for, say, longer than 45 seconds, the response time goes back to taking
several seconds.  I believe this is not necessarily a
squid issue, but perhaps a routing issue, as request/response times
drastically improve if I turn off the ebtables and iptables rules.  Has
anyone else seen this phenomenon?


*server specs:*
Fedora 7 64bit
16GB ram
1.5TB raid 5 configured
2 10/100/1000 ethernet bridged

*squid -v:*

Squid Cache: Version 3.1.0.6
configure options: '--enable-linux-netfilter'
'--enable-removal-policies=heap' '--enable-storeio=aufs'
--with-squid=/usr/src/squid-3.1.0.6 --enable-ltdl-convenience


*squid.conf:*

visible_hostname site.domain.com 
acl manager proto cache_object

acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src X.X.X.X/20 # RFC1918 possible internal network
acl localnet src X.X.X.X/20 # RFC1918 possible internal network
acl localnet src X.X.X.X/21 # RFC1918 possible internal network
acl localnet src X.X.X.X/21 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
http_port 3128
http_port 3129 tproxy
hierarchy_stoplist cgi-bin ?
cache allow all
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
#refresh_pattern ^ftp: 1440 20% 10080
#refresh_pattern ^gopher: 1440 0% 1440
#refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 
override-expire ignore-no-cache ignore-no-store ignore-private 
ignore-reload reload-into-ims
#refresh_pattern -i 
\.(iso|img|avi|wav|mp3|mp4|mpg|mpeg|swf|flv|x-flv|dll|do|xsp|wma|wmv|xml|asp|aspx)$ 
43200 150% 432000 override-expire ignore-no-cache ignore-no-store 
ignore-private ignore-reload reload-into-ims
#refresh_pattern -i 
\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|txt|tiff|pdf)$ 10080 200% 
43200 override-expire ignore-no-cache ignore-no-store ignore-private 
ignore-reload reload-into-ims

#refresh_pattern -i \.index.(html|htm)$ 0 40% 10080 reload-into-ims
#refresh_pattern -i \.(html|htm|css|js|jsp|php|jtp|mspx|pl)$ 1440 40% 40320
#refresh_pattern . 0 40% 40320
cache_mem 1 MB
memory_replacement_policy heap LFUDA
maximum_object_size 300 MB
coredump_dir /usr/local/squid/var/cache
access_log /var/logs/access.log squid
cache_log /var/logs/cache.log
cache_store_log none
cache_replacement_policy heap LFUDA
cache_dir aufs /cache 1 16 256

*ebtables and iptables:*

ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-proto tcp --ip-dport 
80 -j redirect --redirect-target DROP
ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-proto tcp --ip-sport 
80 -j redirect --redirect-target DROP

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 80 -j TPROXY 
--tproxy-mark 0x1/0x1 --on-port 3129

echo 1 > /proc/sys/net/ipv4/ip_forward
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
cd /proc/sys/net/bridge/
for i in *
do
 echo 0 > $i
done
unset i


If running strictly bridged without ebtables or iptables, requests are
processed at 'normal' speed but nothing is passed on to squid for
caching.  I went so far as to run squid as a memory-only cache, and
tried running with squid-3.1.0.8, with the same results.  Any help
speeding this up would be most appreciated.


Tom


[squid-users] Antwort: Re: [squid-users] memory usage for squid-3.0.STABLE15

2009-06-30 Thread Martin . Pichlmaier
I checked -- cached objects are not re-checked, at least not within two or
three hours.

But the memory usage is higher even without icap while the cache is still
filling -- this may be due to the fact that I configured squid to cache
objects only up to 1 MB, while icap scans larger objects, too.

Additionally, squid does not know the icap scan limit, therefore every file
will be sent to the ICAP server. So the higher memory usage is probably the
need to hold large objects, too, at least until ICAP has scanned them.
Your thought about the two memory buffers may be another reason.

OK, thank you, now I understand the extensive memory usage with ICAP more
clearly.
I will have to rethink whether ICAP is really the best way for what I
want.

Thank you again for your insight!

Martin



On 25.06.09 15:39, martin.pichlma...@continental-corporation.com wrote:
> I have a question regarding memory usage for squid. I have 4 proxies, each
> has about 200-400 req/s and 2-5 MB/s with ntlm_auth and about 1000 lines
> of acl,
> squid version is 3.0.STABLE15 on Redhat AS 5 Linux.
> They are busy servers and therefore have no disk cache but a memory cache of
> 6144 MB (6 GB) and provide the internet access for some 10k users.
[...]
> The squid process needed about 7.5 to 7.8 GB and that seems reasonable.
> After we enabled ICAP (c_icap with clamav virus scanning) the memory usage
> of the squid process rose to about 12.8 GB.
> 
> Is this a normal behaviour with squid when icap is enabled?

It's quite possible that when icap is enabled, squid must reserve some
memory for icap i/o buffers.
Although only one buffer may theoretically be needed, it's possible that
squid uses two of them.

The memory usage also depends highly on the number and size of the uncached
objects being accessed (I think cached objects aren't re-checked, are they?)


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
WinError #98652: Operation completed successfully.




[squid-users] Squid with Perl script are not working

2009-06-30 Thread fadi.sbat
Hello,


I am trying to cache Youtube videos with my SquidNT 2.7.STABLE6, but nothing
is working. I also installed strawberry-perl-5.10.0.4.exe. I would be
grateful if anyone can help.


Here is my squid.conf :
--
# ACCESS CONTROLS
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl admin src 192.168.1.65/255.255.255.255
acl our_networks src 192.168.5.0/24 
acl SSL_ports port 443 563 # https, snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 5004 #
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl CONNECT method CONNECT
acl YMPort port 5050
acl youtube dstdomain .youtube.com
acl PURGE method purge

# TAG: http_access
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow our_networks
http_access allow localhost
http_access deny all 
icp_access allow all
icp_access deny all

# TAG: http_port
http_port 3128 transparent

cache_mem 1000 MB
maximum_object_size_in_memory 64 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir ufs E:/SquidCache/cache 6 64 256
maximum_object_size 300 MB
#half_closed_clients off
server_persistent_connections off 
#client_persistent_connections off 
cache_swap_low 98
cache_swap_high 99

#peer parent
#cache_peer random.us.ircache.net parent 3128 3130 
login=tawonxx...@yahoo.co.id:xxxBicbin

# LOGFILE OPTIONS
access_log c:/squid/var/logs/access.log squid
cache_log c:/squid/var/logs/cache.log
cache_store_log none
logfile_rotate 2
emulate_httpd_log off
log_fqdn off
ftp_passive on
storeurl_rewrite_program C:/strawberry/perl/bin/perl.exe 
c:/squid/etc/store_url_rewrite.pl
acl store_rewrite_list url_regex 
\/(get_video\?|videodownload\?|videoplayback\?id) 
\.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|flv)\? \/ads\? 
acl store_rewrite_list url_regex ^http://(.*?)/get_video\?
acl store_rewrite_list url_regex ^http://(.*?)/watch\?
acl store_rewrite_list url_regex ^http://(.*?)/videodownload\?
acl store_rewrite_list url_regex 
^http://i(.*?).photobucket.com/albums/(.*?)/(.*?)/(.*?)\?
acl store_rewrite_list url_regex 
^http://vid(.*?).photobucket.com/albums/(.*?)/(.*?)\?
acl store_rewrite_list url_regex 
^http://.*?.\files\.youporn\.com/.*?/.*?/.*?\.flv\?.*
acl store_rewrite_list url_regex mt.google.com mt0.google.com mt1.google.com 
mt2.google.com
acl store_rewrite_list url_regex mt3.google.com
acl store_rewrite_list url_regex kh.google.com kh0.google.com kh1.google.com 
kh2.google.com
acl store_rewrite_list url_regex kh3.google.com
acl store_rewrite_list url_regex kh.google.com.au kh0.google.com.au 
kh1.google.com.au
acl store_rewrite_list url_regex kh2.google.com.au kh3.google.com.au

acl store_rewrite_list url_regex 
^http:\/\/([A-Za-z-]+[0-9]+)*\.[A-Za-z]*\.[A-Za-z]*
acl store_rewrite_list url_regex ^http:\/\/[a-z]+[0-9]\.google\.com 
doubleclick\.net

# This needs to be narrowed down quite a bit!
acl store_rewrite_list url_regex .youtube.com

acl youtube_query url_regex -i \.youtube\.com\/get_video
acl youtube_query url_regex -i \.youtube\.com\/watch
acl youtube_query url_regex -i 
\.cache[a-z0-9]?[a-z0-9]?[a-z0-9]?\.googlevideo\.com\/videoplayback
acl youtube_query url_regex -i 
\.cache[a-z0-9]?[a-z0-9]?[a-z0-9]?\.googlevideo\.com\/get_video
acl youtube_query url_regex -i 
\.cache[a-z0-9]?[a-z0-9]?[a-z0-9]?\.googlevideo\.com\/watch
acl youtube_deny url_regex -i http:\/\/[a-z][a-z]\.youtube\.com
acl metacafe_query url_regex v.mccont.com
acl dailymotion_query url_regex -i proxy\-[0-9][0-9]\.dailymotion\.com\/
acl google_query url_regex vp.video.google.com
acl redtube_query url_regex dl.redtube.com
acl xtube_query url_regex -i 
[a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv
acl vimeo_query url_regex -i bitcast\.vimeo\.com\/vimeo\/videos\/
acl wrzuta_query url_regex -i va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]?
url_rewrite_access deny youtube_deny
url_rewrite_access allow youtube_query
url_rewrite_access allow metacafe_query
url_rewrite_access allow dailymotion_query
url_rewrite_access allow google_query
url_rewrite_access allow redtube_query
url_rewrite_access allow xtube_query
url_rewrite_access allow vimeo_query
url_rewrite_access allow wrzuta_query


cache allow youtube
cache allow store_rewrite_list
cache allow all

acl Facebook urlpath_regex  .facebook.com
acl Ghaneli urlpath_regex  .ghaneli.net
acl Nogomi urlpath_regex  .nogomi.com
acl FBCDN urlpath_regex  .fbcdn.net
acl TAGSTAT urlpath_regex  .tagstat.com
acl Yahoo urlpath_regex  .yahoo.com
ac
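For context, a storeurl_rewrite_program is just a filter that reads one request per line on stdin and writes either a normalized "store URL" or a blank line (keep the original) on stdout. Below is a minimal Python sketch of that protocol; the SQUIDINTERNAL key format and the id-extraction regex are illustrative assumptions, not the script used in this post:

```python
import re
import sys

# Minimal storeurl_rewrite helper sketch for Squid 2.7 (no concurrency).
# Squid writes one request per line: "URL client/fqdn ident method ...".
# The helper replies with a canonical store URL, or a blank line to
# leave the URL unchanged. The patterns below are hypothetical.

VIDEO_ID = re.compile(r'[?&](?:id|video_id)=([0-9a-zA-Z_-]+)')

def store_key(url: str) -> str:
    """Return a canonical store URL, or '' to keep the original."""
    if 'get_video' in url or 'videoplayback' in url:
        m = VIDEO_ID.search(url)
        if m:
            # Collapse all mirror hostnames to one synthetic cache key.
            return 'http://video-srv.youtube.com.SQUIDINTERNAL/id=' + m.group(1)
    return ''

def main() -> None:
    for line in sys.stdin:
        url = line.split(' ', 1)[0]      # first field is the URL
        sys.stdout.write(store_key(url) + '\n')
        sys.stdout.flush()               # squid expects unbuffered replies

if __name__ == '__main__':
    main()
```

squid.conf would then point storeurl_rewrite_program at such a script, gated by access rules like the ones above.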

Re: [squid-users] Squid with TPROXY slow response to requests

2009-06-30 Thread Amos Jeffries

trasor wrote:
I am running squid with tproxy having followed the wiki Features/TPROXY 
in bridged mode.  Squid is working properly with no evident errors in 
the logs, however initial requests are extremely slow to respond.  i.e 
the first request for yahoo.com may take several seconds  to retrieve 
the information and return a page to the requester, the second request 
is nearly instantaneous.  And as long as the page is requested over and 
over the page will remain instantaneous.  If the page is not requested 
for say longer than 45 seconds, the response time goes back to taking 
several seconds to respond.  I believe this not to necessarily be a 
squid issue, but perhaps a routing issue as if I turn off the ebtables 
and iptables request/response times drastically improve.  Has anyone 
else seen this phenomenon?


*server specs:*
Fedora 7 64bit
16GB ram
1.5TB raid 5 configured
2 10/100/1000 ethernet bridged

*squid -v:*

Squid Cache: Version 3.1.0.6
configure options: '--enable-linux-netfilter'
'--enable-removal-policies=heap' '--enable-storeio=aufs'
--with-squid=/usr/src/squid-3.1.0.6 --enable-ltdl-convenience


*squid.conf:*

visible_hostname site.domain.com
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src X.X.X.X/20 # RFC1918 possible internal network
acl localnet src X.X.X.X/20 # RFC1918 possible internal network
acl localnet src X.X.X.X/21 # RFC1918 possible internal network
acl localnet src X.X.X.X/21 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
http_port 3128
http_port 3129 tproxy
hierarchy_stoplist cgi-bin ?
cache allow all
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
#refresh_pattern ^ftp: 1440 20% 10080
#refresh_pattern ^gopher: 1440 0% 1440
#refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 
override-expire ignore-no-cache ignore-no-store ignore-private 
ignore-reload reload-into-ims
#refresh_pattern -i 
\.(iso|img|avi|wav|mp3|mp4|mpg|mpeg|swf|flv|x-flv|dll|do|xsp|wma|wmv|xml|asp|aspx)$ 
43200 150% 432000 override-expire ignore-no-cache ignore-no-store 
ignore-private ignore-reload reload-into-ims
#refresh_pattern -i 
\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|txt|tiff|pdf)$ 10080 200% 
43200 override-expire ignore-no-cache ignore-no-store ignore-private 
ignore-reload reload-into-ims

#refresh_pattern -i \.index.(html|htm)$ 0 40% 10080 reload-into-ims
#refresh_pattern -i \.(html|htm|css|js|jsp|php|jtp|mspx|pl)$ 1440 40% 40320
#refresh_pattern . 0 40% 40320
cache_mem 1 MB
memory_replacement_policy heap LFUDA
maximum_object_size 300 MB
coredump_dir /usr/local/squid/var/cache
access_log /var/logs/access.log squid
cache_log /var/logs/cache.log
cache_store_log none
cache_replacement_policy heap LFUDA
cache_dir aufs /cache 1 16 256

*ebtables and iptables:*

ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-proto tcp --ip-dport 
80 -j redirect --redirect-target DROP
ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-proto tcp --ip-sport 
80 -j redirect --redirect-target DROP


I'm a little fuzzy on when BROUTING takes place. DROP does not look that 
good though. Can you clarify what the above does in your understanding 
please?



Going by the packet flow map 
http://l7-filter.sourceforge.net/PacketFlow.png I would think the 
packets on a bridge device naturally flow along the bottom line of 
processing (blue). Which does include the iptables mangle PREROUTING 
table containing the TPROXY and DIVERT rules.


Have you tried it without the special ebtables rules, only the iptables 
rules?


There _may_ be some need for adding an ACCEPT for port 80 stuff in the 
ebtables filter INPUT/OUTPUT tables. But other than that I would expect 
the iptables mangle stuff to be all.
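If that turns out to be needed, such ACCEPT rules might look like the following (an untested sketch; whether they are required at all is the open question above):

```
# Hypothetical: explicitly accept port-80 frames in the ebtables
# filter INPUT/OUTPUT chains so bridged traffic reaches the IP stack.
ebtables -A INPUT -p ipv4 --ip-proto tcp --ip-dport 80 -j ACCEPT
ebtables -A OUTPUT -p ipv4 --ip-proto tcp --ip-sport 80 -j ACCEPT
```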



iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 80 -j TPROXY 
--tproxy-mark 0x1/0x1 --on-port 3129

echo 1 > /proc/sys/net/ipv4/ip_forward
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
cd /proc/sys/net/bridge/
for i in *
do
 echo 0 > $i
done
unset i


If running

Re: [squid-users] Squid with Perl script are not working

2009-06-30 Thread Amos Jeffries

fadi.sbat wrote:

Hello,

I am trying to cache Youtube videos with my SquidNT 2.7.STABLE6, but nothing
is working. I also installed strawberry-perl-5.10.0.4.exe. I would be
grateful if anyone can help.




SquidNT? no such thing any more. Not since early Squid 2.6 integrated 
Windows support. There are a few fakes of extremely dubious origin 
floating around under that banner though.


Please ensure you have the official Squid binary built for windows 
either by yourself or http://squid.acmeconsulting.it/.  They are the 
_only_ third-party supplier of Windows Squid binaries currently known 
and accepted by the Squid project team.



Also for your youtube rules, check the youtube caching discussion page. 
It's being updated all the time as youtube change their systems.

It seems to have a newer set of config and rewriter than you are using.
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion


Amos
Squid-3 Maintainer



Here is my squid.conf :
--
# ACCESS CONTROLS
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl admin src 192.168.1.65/255.255.255.255
acl our_networks src 192.168.5.0/24 
acl SSL_ports port 443 563 # https, snews

acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 5004 #
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl CONNECT method CONNECT
acl YMPort port 5050
acl youtube dstdomain .youtube.com
acl PURGE method purge

# TAG: http_access
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow our_networks
http_access allow localhost
http_access deny all 
icp_access allow all

icp_access deny all

# TAG: http_port
http_port 3128 transparent

cache_mem 1000 MB
maximum_object_size_in_memory 64 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir ufs E:/SquidCache/cache 6 64 256
maximum_object_size 300 MB
#half_closed_clients off
server_persistent_connections off 
#client_persistent_connections off 
cache_swap_low 98

cache_swap_high 99

#peer parent
#cache_peer random.us.ircache.net parent 3128 3130 
login=tawonxx...@yahoo.co.id:xxxBicbin

# LOGFILE OPTIONS
access_log c:/squid/var/logs/access.log squid
cache_log c:/squid/var/logs/cache.log
cache_store_log none
logfile_rotate 2
emulate_httpd_log off
log_fqdn off
ftp_passive on
storeurl_rewrite_program C:/strawberry/perl/bin/perl.exe 
c:/squid/etc/store_url_rewrite.pl
acl store_rewrite_list url_regex \/(get_video\?|videodownload\?|videoplayback\?id) \.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|flv)\? \/ads\? 
acl store_rewrite_list url_regex ^http://(.*?)/get_video\?

acl store_rewrite_list url_regex ^http://(.*?)/watch\?
acl store_rewrite_list url_regex ^http://(.*?)/videodownload\?
acl store_rewrite_list url_regex 
^http://i(.*?).photobucket.com/albums/(.*?)/(.*?)/(.*?)\?
acl store_rewrite_list url_regex 
^http://vid(.*?).photobucket.com/albums/(.*?)/(.*?)\?
acl store_rewrite_list url_regex 
^http://.*?.\files\.youporn\.com/.*?/.*?/.*?\.flv\?.*
acl store_rewrite_list url_regex mt.google.com mt0.google.com mt1.google.com 
mt2.google.com
acl store_rewrite_list url_regex mt3.google.com
acl store_rewrite_list url_regex kh.google.com kh0.google.com kh1.google.com 
kh2.google.com
acl store_rewrite_list url_regex kh3.google.com
acl store_rewrite_list url_regex kh.google.com.au kh0.google.com.au 
kh1.google.com.au
acl store_rewrite_list url_regex kh2.google.com.au kh3.google.com.au

acl store_rewrite_list url_regex 
^http:\/\/([A-Za-z-]+[0-9]+)*\.[A-Za-z]*\.[A-Za-z]*
acl store_rewrite_list url_regex ^http:\/\/[a-z]+[0-9]\.google\.com 
doubleclick\.net

# This needs to be narrowed down quite a bit!
acl store_rewrite_list url_regex .youtube.com

acl youtube_query url_regex -i \.youtube\.com\/get_video
acl youtube_query url_regex -i \.youtube\.com\/watch
acl youtube_query url_regex -i 
\.cache[a-z0-9]?[a-z0-9]?[a-z0-9]?\.googlevideo\.com\/videoplayback
acl youtube_query url_regex -i 
\.cache[a-z0-9]?[a-z0-9]?[a-z0-9]?\.googlevideo\.com\/get_video
acl youtube_query url_regex -i 
\.cache[a-z0-9]?[a-z0-9]?[a-z0-9]?\.googlevideo\.com\/watch
acl youtube_deny url_regex -i http:\/\/[a-z][a-z]\.youtube\.com
acl metacafe_query url_regex v.mccont.com
acl dailymotion_query url_regex -i proxy\-[0-9][0-9]\.dailymotion\.com\/
acl google_query url_regex vp.video.google.com
acl redtube_query url_regex dl.redtube.com
acl xtube_query url_regex -i 
[a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv
acl vimeo_

[squid-users] squid becomes very slow during peak hours

2009-06-30 Thread goody goody

Hi there,

I am running squid 2.5 on FreeBSD 7, and my squid box responds very slowly
during peak hours. My squid machine has twin dual-core processors, 4 ram, and
the following hdds.

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/da0s1a    9.7G    241M    8.7G     3%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/da0s1f     73G     35G     32G    52%    /cache1
/dev/da0s1g     73G    2.0G     65G     3%    /cache2
/dev/da0s1e     39G    2.5G     33G     7%    /usr
/dev/da0s1d     58G    6.4G     47G    12%    /var


Below are the status outputs and the settings I have applied. I need further
guidance to improve the box.

last pid: 50046;  load averages:  1.02,  1.07,  1.02    up 7+20:35:29  15:21:42
26 processes:  2 running, 24 sleeping
CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
Swap: 4096M Total, 20K Used, 4096M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
49819 sbt         1 105    0   360M   351M CPU3   3  92:43 98.14% squid
  487 root        1  96    0  4372K  2052K select 0  57:00  3.47% natd
  646 root        1  96    0 16032K 12192K select 3  54:28  0.00% snmpd
49821 sbt         1  -4    0  3652K  1048K msgrcv 0   0:13  0.00% diskd
49822 sbt         1  -4    0  3652K  1048K msgrcv 0   0:10  0.00% diskd
49864 root        1  96    0  3488K  1536K CPU2   1   0:04  0.00% top
  562 root        1  96    0  3156K  1008K select 0   0:04  0.00% syslogd
  717 root        1   8    0  3184K  1048K nanslp 0   0:02  0.00% cron
49631 x-man       1  96    0  8384K  2792K select 0   0:01  0.00% sshd
49635 root        1  20    0  5476K  2360K pause  0   0:00  0.00% csh
49628 root        1   4    0  8384K  2776K sbwait 1   0:00  0.00% sshd
  710 root        1  96    0  5616K  2172K select 1   0:00  0.00% sshd
49634 x-man       1   8    0  3592K  1300K wait   1   0:00  0.00% su
49820 sbt         1  -8    0  1352K   496K piperd 3   0:00  0.00% unlinkd
49633 x-man       1   8    0  3456K  1280K wait   3   0:00  0.00% sh
  765 root        1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
  766 root        1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
  767 root        1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
  769 root        1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
  771 root        1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
  770 root        1   5    0  3156K   872K ttyin  0   0:00  0.00% getty
  768 root        1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
  772 root        1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
47303 root        1   8    0  8080K  3560K wait   1   0:00  0.00% squid
  426 root        1  96    0  1888K   420K select 0   0:00  0.00% devd
  146 root        1  20    0  1356K   668K pause  0   0:00  0.00% adjkerntz


pxy# iostat
      tty             da0            pass0             cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95

pxy# vmstat
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr da0 pa0   in   sy   cs us sy id
 1 3 0  458044 103268    12   0   0   0    30   5   0   0  273 1721 2553  4  1 95

pxy# netstat -am
1376/1414/2790 mbufs in use (current/cache/total)
1214/1372/2586/25600 mbuf clusters in use (current/cache/total/max)
1214/577 mbuf+clusters out of packet secondary zone in use (current/cache)
147/715/862/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
3360K/5957K/9317K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/7/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines


"netstat -an | grep "TIME_WAIT" | more " command 17 scroll pages of crt.

some lines from squid.conf
cache_mem 256 MB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF

cache_swap_low 80
cache_swap_high 90

cache_dir diskd /cache2 6 16 256 Q1=72 Q2=64
cache_dir diskd /cache1 6 16 256 Q1=72 Q2=64

cache_log /var/log/squid25/cache.log
cache_access_log /var/log/squid25/access.log
cache_store_log none

half_closed_clients off
maximum_object_size 1024 KB 

pxy# sysctl -a | grep maxproc
kern.maxproc: 6164
kern.maxprocperuid: 5547
kern.ipc.somaxconn: 1024
kern.maxfiles: 12328
kern.maxfilesperproc: 11095
net.inet.ip.portrange.randomtime: 45
net.inet.ip.portrange.randomcps: 10
net.inet.ip.portrange.randomized: 1
net.inet.ip.p

Re: [squid-users] Serving from the cache when the origin server crashes

2009-06-30 Thread Amos Jeffries

Elli Albek wrote:

Thanks, this is basically the solution. Can I do header override in squid to
add this if the origin does not send this header?


Not early enough for it to matter. The header alteration done by Squid 
happens just prior to the reply being sent to the client. The caching 
decisions happen far earlier than this header addition.




Would it make sense for squid to keep the state of the origin server for a
few seconds, so it does not have to contact it on each individual request
when it is not accessible? Something like:

1. If origin responds to a request, set origin state to active.
2. If origin does not respond to a request, set origin state to "not
active". This expires after a few seconds.
3. If origin state is not defined, it is assumed to be active.
4. If origin state is "not active", do not forward more than one request at
a time.

Sounds a little bit like a can of worms :)


Squid already does. The limit for the current stable releases is that 10
failed requests are needed to mark a peer dead. This is made configurable
in the current development releases: Squid 3.1.0.9+ beta, 2.HEAD (2.8
alpha) and 3.HEAD (3.2 alpha).


All Squid versions also do the following:

 * Failed responses from each DNS-provided IP are recorded, and that IP is
not tried again. Turn balance_on_multiple_ip off for this to become visible.

 * If --enable-icmp is used, RTT times between Squid and each source
server are recorded for future routing selections.

 * The monitor* options to cache_peer turn on active HTTP-level 'ping'
requests to each peer at regular intervals, for quicker dead detection
on low-throughput networks and quicker re-alive detection in all cases.
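As a concrete illustration of the monitor* options Amos mentions (the peer name and probe URL are hypothetical; the option names are from the Squid 2.6+ cache_peer documentation, so verify them in squid.conf.documented for your release):

```
# Hypothetical peer: poll /alive.html every 30 seconds and treat the peer
# as dead if the probe does not complete within 5 seconds.
cache_peer parent.example.net parent 3128 3130 monitorurl=http://parent.example.net/alive.html monitorinterval=30 monitortimeout=5
```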






Elli

-Original Message-
From: Chris Woodfield [mailto:rek...@semihuman.com] 
Sent: Thursday, June 25, 2009 9:36 AM

To: Amos Jeffries; Myles Merrell
Cc: Squid Users
Subject: Re: [squid-users] Serving from the cache when the origin server
crashes

Take a careful look at the stale-if-error Cache-control header, as  
described below:


http://tools.ietf.org/html/draft-nottingham-http-stale-if-error-01

In a nutshell, this allows you to force squid to serve up objects if  
the origin is down, even if those objects are stale, for a  
configurable number of seconds after the object's original stale  
timestamp.
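For concreteness, a hypothetical origin response using the draft's syntax could carry a header like the one below: the object is fresh for 600 seconds, and may then be served stale for up to a further 86400 seconds whenever revalidation against the origin fails.

```
Cache-Control: max-age=600, stale-if-error=86400
```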


However, you'll still have the overhead of squid attempting to reach  
the origin, failing, then serving up the stale object on each request  
- as such, I'd highly recommend making sure that if you use this, you  
shut down the server in such a way that it generates an ICMP  
Destination Unreachable reply when squid attempts to connect.
If you take the server off the air completely, squid will have to wait  
for network timeouts before returning the stale content, which your  
users will notice.


Of course, you'll need to make sure that squid has your site cached in  
its entirety - it can't retrieve not-cached content from a dead  
server :)


Amos, can you confirm that 3.x supports this? I'm using it in 2.7.

-C



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9


[squid-users] Changes to squid-cache.org FTP servers

2009-06-30 Thread Amos Jeffries


There have been some changes to the information provided on the Squid 
Project ftp.squid-cache.org server.


The squid-2 and squid-3 directories have been replaced with a single 
squid/ directory and the DEVEL and STABLE separation has been removed.


All current release source packages are to be found at the top level 
together.


3.1 packages are ftp://ftp.squid-cache.org/pub/squid/squid-3.1.*.gz
3.0 packages are ftp://ftp.squid-cache.org/pub/squid/squid-3.0.*.gz
2.7 packages are ftp://ftp.squid-cache.org/pub/squid/squid-2.7.*.gz


Binary packages, if provided for current releases, are under 
ftp://ftp.squid-cache.org/pub/squid/binaries//*



Older versions of packages which are deprecated by the project for any 
reason are moved below ftp://ftp.squid-cache.org/pub/archive/



These changes are now visible on our FTP servers

 ftp://ftp.squid-cache.org/pub/squid/

and the mirrors. For a list of mirror sites see

 http://www.squid-cache.org/Download/mirrors.dyn



The Squid HTTP Proxy team


[squid-users] proxy become very slow during peak time

2009-06-30 Thread abdul sami

Re: [squid-users] squid becomes very slow during peak hours

2009-06-30 Thread Adrian Chadd
Upgrade to a later Squid version!



adrian

2009/6/30 goody goody :
>
> Hi there,
>
> I am running Squid 2.5 on FreeBSD 7, and my Squid box responds very slowly
> during peak hours. My machine has two dual-core processors, 4 GB of RAM and
> the following disks.
>
> SNIP

[squid-users] Question about: Changes to squid-cache.org FTP servers

2009-06-30 Thread Silamael

Hello there,

Will the paths on the HTTP-Server also change?

-- Matthias


[squid-users] Top users graphs with Sarg

2009-06-30 Thread Henrique Machado
Good morning,

I know this isn't the exact list for this question, but since Sarg and
Squid are very closely related, I presumed this would be my best shot
so far.
I know that I can get individual top-user graphs. Does anyone know how
I can put all the top users in one single graph?
Or does anyone know of any other software that does something like that?

Thank you all

Henrique


Re: [squid-users] acl maxconn per file or url

2009-06-30 Thread Henrik Nordstrom
tis 2009-06-30 klockan 15:13 +1200 skrev Amos Jeffries:
> On Mon, 29 Jun 2009 18:50:30 -0700 (PDT), Chudy Fernandez
>  wrote:
> > I think this well help
> > 
> > acl maxcon maxconn 4
> > acl partial rep_header Content-Range .*
> > http_reply_access deny partial maxcon
> > 
> 
> I wonder
> 
> What this _does_ is cause replies to be sent back to the client with all
> range encoding and wrapping, but without the range position information or
> other critical details in the Content-Range: header.

No it doesn't. It prevents replies with Content-Range if there are more
than 4 concurrent connections from the same IP (with no regard to what
those connections are being used for).

It's http_reply_access, not http_header_access...

> This does not prevent Squid from fetching the multiple requests for ranges
> in the first place, nor save any bandwidth used by Squid doing so.

But it does cause Squid to abort the partial requests once headers have
been received.

Regards
Henrik



Re: [squid-users] Squid3 compile option. Which features are included by default?

2009-06-30 Thread Henrik Nordstrom
mån 2009-06-29 klockan 10:43 +0100 skrev Dayo Adewunmi:

> Which of these options are included by default? For example, when I install
> squid with apt, I get delay pools enabled by default. Is this the same for 
> installing squid from source?

The options enabled by default are listed with an --disable- prefix in
the ./configure --help output.

All options listed as --enable-XXX are disabled by default.

Regards
Henrik



Re: [squid-users] squid becomes very slow during peak hours

2009-06-30 Thread Henrik Nordstrom
tis 2009-06-30 klockan 05:58 -0700 skrev goody goody:
> Hi there,
> 
> I am running Squid 2.5 on FreeBSD 7, and my Squid box responds very slowly
> during peak hours. My machine has two dual-core processors, 4 GB of RAM and
> the following disks.
> 
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/da0s1a    9.7G    241M    8.7G     3%    /
> devfs          1.0K    1.0K      0B   100%    /dev
> /dev/da0s1f     73G     35G     32G    52%    /cache1
> /dev/da0s1g     73G    2.0G     65G     3%    /cache2
> /dev/da0s1e     39G    2.5G     33G     7%    /usr
> /dev/da0s1d     58G    6.4G     47G    12%    /var

Doesn't tell much about your drives... is this a single drive partitioned
into multiple partitions, or a RAID of some kind?

If it's a single drive then that's quite a noticeable bottleneck under
peak load... quite apart from the obsolete Squid version, with its slow
networking, that you are using.

Regards
Henrik



Re: [squid-users] Question about: Changes to squid-cache.org FTP servers

2009-06-30 Thread Henrik Nordstrom
tis 2009-06-30 klockan 17:08 +0200 skrev Silamael:

> Will the paths on the HTTP-Server also change?

Some day they may, but no change planned there today.

Regards
Henrik



Re: [squid-users] R: [squid-users] cache size and structure

2009-06-30 Thread Riccardo Castellani

No, it is not. It may be 1mbps .OR. 76GB for week. it can't be anything
"per second for week". You may mean 1mbps during weekdays, 1mbps during
the whole week, 1mbps average.


I mean that if the weekly average of HTTP traffic is 1 Mbps (monitored by
MRTG), then all the HTTP traffic that goes to my Squid amounts to 76 GB in
a week.

In fact MRTG gives me this information:
maximum peak for a day
traffic average for a day
...for a week
...for a month

Do you understand my calculation?
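The calculation is easy to verify; restated as a tiny script (pure arithmetic, no Squid or MRTG assumptions):

```python
# Restating the poster's arithmetic: a link averaging 1 megabit/second
# over a whole week moves about 76 GB (decimal units, as MRTG reports).
seconds_per_week = 60 * 60 * 24 * 7        # 604800 seconds
megabits = 1 * seconds_per_week            # 604800 Mb at a 1 Mbps average
megabytes = megabits / 8                   # 75600 MB
gigabytes = megabytes / 1000               # 75.6 GB -- "about 76 GB"
print(round(gigabytes, 1))                 # -> 75.6
```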


ok, are your child caches configured as neighbours to each other?


Both squids (B, C) have Squid A configured as their parent cache.
I don't know what "configured as neighbours to each other" means. Do you
mean B as a neighbour of C?!


- Original Message - 
From: "Matus UHLAR - fantomas" 

To: 
Sent: Tuesday, June 30, 2009 10:10 AM
Subject: Re: [squid-users] R: [squid-users] cache size and structure



>> excuse me, 1 megabit per second for a week, so :
>> 1 Mb *60*60*24*7=604.800 / 8 = 75.600 MB  about 76 GB traffic for 
>> week.
>> I'm refering both inbound and outbound traffic (1 Mbps in , 1 Mbps 
>> out).


>Well, that is just one megabit per second. That is the link bandwidth, 
>it's
>irelevant to provide different interval, you are just confusing us (or 
>at

>least me). We can count how much does that make for a day, week, month,
>year
>...


On 30.06.09 09:51, Riccardo Castellani wrote:

Matus Uhlar,


Skip private replies please! I am not subscribed to the list to receive
private squid-related mail.


my internet link bandwidth is about 10 Mbps and I'm monitoring (by mrtg)
traffic usage of my Squid parent (A), which is about 1 Mbps for a week 
(so

76 GB data traffic for a week).


No, it is not. It may be 1mbps .OR. 76GB for week. it can't be anything
"per second for week". You may mean 1mbps during weekdays, 1mbps during
the whole week, 1mbps average.


I have also 2 squid chileds (B,C) which
communicate to squid A. Some user groups use Squid A, others use Squid B 
and

others use Squid C.
1 Mbps


>so, some users access directly your squid?
yes


ok, are your child caches configured as neighbours to each other?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Windows 2000: 640 MB ought to be enough for anybody 




[squid-users] info on reverse proxy for multiple https sites

2009-06-30 Thread Mario Remy Almeida
Hi All,

Would like to know if it's possible to set up a reverse proxy for multiple
HTTPS sites with just one IP for Squid,

meaning Squid will listen on one IP and do reverse proxying for multiple
domains with multiple certificates (one certificate per domain).

//Remy


--
Disclaimer and Confidentiality


This material has been checked for  computer viruses and although none has
been found, we cannot guarantee  that it is completely free from such problems
and do not accept any  liability for loss or damage which may be caused.
Please therefore  check any attachments for viruses before using them on your
own  equipment. If you do find a computer virus please inform us immediately
so that we may take appropriate action. This communication is intended  solely
for the addressee and is confidential. If you are not the intended recipient,
any disclosure, copying, distribution or any action  taken or omitted to be
taken in reliance on it, is prohibited and may be  unlawful. The views
expressed in this message are those of the  individual sender, and may not
necessarily be that of ISA.


[squid-users] Squid 3.0 STABLE16

2009-06-30 Thread Beavis
Hi,

I'm looking for the ldap_auth option for Squid 3.0. All I see in the
./configure options is the following:

--enable-basic-auth-helpers= (OPTIONS: digest_auth, negotiate_auth,
basic_auth, external_acl, ntlm_auth)
--enable-auth= (OPTIONS: digest, ntlm, basic, negotiate)

If someone can point me to the right direction, I would very much
appreciate it.


-b



-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


[squid-users] Re: Squid 3.0 STABLE16

2009-06-30 Thread Beavis
Sorry for the noise... I found it.

thanks again.

On Tue, Jun 30, 2009 at 1:24 PM, Beavis wrote:
> Hi,
>
>  I'm looking for the ldap_auth option for squid 3.0. all i see on the
> ./configure options are the following
>
> --enable-basic-auth-helpers= (OPTIONS: digest_auth, negotiate_auth,
> basic_auth, external_acl, ntlm_auth)
> --enable-auth= (OPTIONS: digest, ntlm, basic, negotiate)
>
> If someone can point me to the right direction, I would very much
> appreciate it.
>
>
> -b
>
>
>
> --
> ()  ascii ribbon campaign - against html e-mail
> /\  www.asciiribbon.org   - against proprietary attachments
>



-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


Re: [squid-users] info on reverse proxy for multiple https sites

2009-06-30 Thread Chris Woodfield
If you need multiple SSL certs, you need a different IP/tcp port combo  
for each certificate.


If all your backend servers are within a single domain, a wildcard  
cert may do the trick.
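A sketch of what that looks like in squid.conf for Squid 2.7/3.0 (the IPs, paths and domains are made up; the point is one https_port line, with its own cert=, per listening IP:port):

```
# One listening IP (or port) per certificate; a wildcard cert can collapse
# several subdomains onto a single one of these lines.
https_port 192.0.2.10:443 cert=/etc/squid/a.example.com.pem accel defaultsite=a.example.com
https_port 192.0.2.11:443 cert=/etc/squid/b.example.com.pem accel defaultsite=b.example.com
```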


-C

On Jun 30, 2009, at 3:07 PM, Mario Remy Almeida wrote:


Hi All,

Would like to know if its possible to setup reverse proxy for multiple
https with just 1 IP for squid

meaning squid will listen on 1 IP and do reverse proxy for multiple
domains with multiple certificate (certificate as per the domain)

//Remy


--
Disclaimer and Confidentiality


This material has been checked for  computer viruses and although  
none has
been found, we cannot guarantee  that it is completely free from  
such problems
and do not accept any  liability for loss or damage which may be  
caused.
Please therefore  check any attachments for viruses before using  
them on your
own  equipment. If you do find a computer virus please inform us  
immediately
so that we may take appropriate action. This communication is  
intended  solely
for the addressee and is confidential. If you are not the intended  
recipient,
any disclosure, copying, distribution or any action  taken or  
omitted to be

taken in reliance on it, is prohibited and may be  unlawful. The views
expressed in this message are those of the  individual sender, and  
may not

necessarily be that of ISA.





Re: [squid-users] squid becomes very slow during peak hours

2009-06-30 Thread Chris Robertson

goody goody wrote:

Hi there,

I am running squid 2.5 on freebsd 7,


As Adrian said, upgrade.  2.6 (and 2.7) support kqueue under FreeBSD.


and my Squid box responds very slowly during peak hours. My machine has
two dual-core processors, 4 GB of RAM and the following disks.

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/da0s1a    9.7G    241M    8.7G     3%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/da0s1f     73G     35G     32G    52%    /cache1
/dev/da0s1g     73G    2.0G     65G     3%    /cache2
/dev/da0s1e     39G    2.5G     33G     7%    /usr
/dev/da0s1d     58G    6.4G     47G    12%    /var


Below are the status outputs and the settings I have applied. I need further
guidance to improve the box.

last pid: 50046;  load averages:  1.02,  1.07,  1.02    up 7+20:35:29  15:21:42
26 processes:  2 running, 24 sleeping
CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
Swap: 4096M Total, 20K Used, 4096M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
49819 sbt         1 105    0   360M   351M CPU3   3  92:43 98.14% squid
  487 root        1  96    0  4372K  2052K select 0  57:00  3.47% natd
  646 root        1  96    0 16032K 12192K select 3  54:28  0.00% snmpd

SNIP

pxy# iostat
      tty             da0            pass0             cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95

pxy# vmstat
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr da0 pa0   in   sy   cs us sy id
 1 3 0  458044 103268    12   0   0   0    30   5   0   0  273 1721 2553  4  1 95


Those statistics show wildly different utilization.  The first (top, I 
assume) shows 75% idle (or a whole CPU in use).  The next two show 95% 
idle (in effect, one CPU 20% used).  How close (in time) were the 
statistics gathered?




some lines from squid.conf
cache_mem 256 MB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF

cache_swap_low 80
cache_swap_high 90

cache_dir diskd /cache2 6 16 256 Q1=72 Q2=64
cache_dir diskd /cache1 6 16 256 Q1=72 Q2=64

cache_log /var/log/squid25/cache.log
cache_access_log /var/log/squid25/access.log
cache_store_log none

half_closed_clients off
maximum_object_size 1024 KB

If any other info is required, I shall provide it.


The types (and number) of ACLs in use would be of interest as well.


Regards,
.Goody.
  


Chris



Re: [squid-users] Question about: Changes to squid-cache.org FTP servers

2009-06-30 Thread Amos Jeffries
On Tue, 30 Jun 2009 17:08:48 +0200, Silamael 
wrote:
> 
> Hello there,
> 
> Will the paths on the HTTP-Server also change?
> 

No. We are only considering a change to the snapshots location for HTTP.
But that is not happening yet.

Amos


[squid-users] Strange problem with sibling squids in accelerator mode

2009-06-30 Thread Lu, Roy
Hi,

I encountered a strange problem in using sibling squids as accelerators.
I have two accelerator squids, A and B (on two different boxes). They
are set up as sibling cache peers which both point to the same parent
cache_peer origin content server. I used the following commands to run
my test:

1. Load an object into A:

%squidclient -h host.name.of.A URL

2. Purge the object from B:

%squidclient -h host.name.of.B -m PURGE URL

3. Double check to make sure A has the object and B does not:

%squidclient -h host.name.of.A -m HEAD -H "Cache-Control: only-if-cached\n" URL
Resulted in TCP_MEM_HIT

%squidclient -h host.name.of.B -m HEAD -H "Cache-Control: only-if-cached\n" URL
Resulted in TCP_MISS

4. Request the object from B:

%squidclient -h host.name.of.B URL

Now the strange problem comes in. If I run the last step on box A, ICP
communication occurs: in A's log I see UDP_HIT and TCP_MEM_HIT, and in
B's log TCP_MISS and SIBLING_HIT. However, if I run the last step on box
B, there is no ICP communication; Squid B simply goes to the parent
origin server to get the object (B's log shows FIRST_UP_PARENT and
nothing appears in A's log). When I run the same test with squidclient
on a third machine, the result is negative too. So it seems the sibling
cache retrieves the object from a box only when I run the squidclient
utility on that same box where the object is cached.
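When comparing results across A's and B's logs like this, a tiny helper can pull the result tags out of access.log lines. This is just a sketch, assuming the default native log format where the fourth whitespace-separated field is code/status:

```python
# Illustrative helper to extract the Squid result tag (e.g. TCP_MEM_HIT)
# from a native-format access.log line. Assumes the default log format,
# where field 4 is "RESULT_CODE/status".

def result_code(line):
    """Return the Squid result tag from a native-format access.log line."""
    return line.split()[3].split("/")[0]

sample = ("1246400000.123    5 10.0.0.1 TCP_MEM_HIT/200 1024 "
          "GET http://example.com/obj - NONE/- text/html")
print(result_code(sample))   # TCP_MEM_HIT
```

Grepping both proxies' logs for tags like UDP_HIT, SIBLING_HIT, and FIRST_UP_PARENT this way makes it easier to see which peer actually served each request.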

The configuration for Squid A is:

#===

# ACL changes
#===

# acl for purge method
acl acl_purge method purge

# acl for origin app server
acl acl_gpl_app_servers dstdomain vmprodcagpcna04.firstamdata.net

# acl for cache peer squid server
acl acl_gpl_cache_sibling src host.name.of.A

#===

# http_access changes
#===

# allow purge method from localhost or sibling
http_access allow acl_purge localhost
http_access allow acl_purge acl_gpl_cache_sibling
http_access deny acl_purge

# allow http access to app servers and from cache sibling
http_access allow acl_gpl_app_servers
http_access allow acl_gpl_cache_sibling

#===

# icp_access changes
#===

# allow icp queries from cache sibling
icp_access allow acl_gpl_cache_sibling

#===

# cache_peer changes
#===

cache_peer vmprodcagpcna04.firstamdata.net parent 7533 0 no-query
originserver name=cp_gpl_app_servers
cache_peer host.name.of.A sibling 3128 3130 name=cp_gpl_cache_sibling
proxy-only

#===

# cache_peer_access changes
#===

# Allow peer connection to the origin app server and sibling cache peer
cache_peer_access cp_gpl_app_servers allow acl_gpl_app_servers
cache_peer_access cp_gpl_cache_sibling allow acl_gpl_cache_sibling


Configuration for B is almost identical, except that host.name.of.A in
the acl and cache_peer tags is replaced with B's.

Can someone point out what might be the problem here?

Thanks.
Roy
**
 
This message may contain confidential or proprietary information intended only 
for the use of the 
addressee(s) named above or may contain information that is legally privileged. 
If you are 
not the intended addressee, or the person responsible for delivering it to the 
intended addressee, 
you are hereby notified that reading, disseminating, distributing or copying 
this message is strictly 
prohibited. If you have received this message by mistake, please immediately 
notify us by  
replying to the message and delete the original message and any copies 
immediately thereafter. 

Thank you. 
**
 
FACLD



Re: [squid-users] acl maxconn per file or url

2009-06-30 Thread Amos Jeffries
On Tue, 30 Jun 2009 18:35:09 +0200, Henrik Nordstrom
 wrote:
> tis 2009-06-30 klockan 15:13 +1200 skrev Amos Jeffries:
>> On Mon, 29 Jun 2009 18:50:30 -0700 (PDT), Chudy Fernandez
>>  wrote:
>> > I think this will help
>> > 
>> > acl maxcon maxconn 4
>> > acl partial rep_header Content-Range .*
>> > http_reply_access deny partial maxcon
>> > 
>> 
>> I wonder
>> 
>> What this _does_ is cause replies to be sent back to the client with all
>> range encoding and wrapping, but without the range position information
>> or
>> other critical details in the Content-Range: header.
> 
> No it doesn't. It prevents replies with Content-Range if there is more
> than 4 concurrent connections from the same IP (with no regard to what
> those connections is being used for).
> 
> It's http_reply_access, not http_header_access...
> 

Doh! Thanks Henrik.

>> This does not prevent Squid from fetching the multiple requests for
>> ranges
>> in the first place, nor save any bandwidth used by Squid doing so.
> 
> But it does cause Squid to abort the partial requests once headers have
> been received.
> 
> Regards
> Henrik

Amos
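Henrik's description of maxconn — the ACL matches purely on the number of concurrent connections from the client IP, with no regard to what those connections carry — can be sketched like this. This is purely illustrative, not Squid's actual implementation:

```python
# Sketch of maxconn semantics: the ACL matches (so a deny rule fires)
# when the client IP already has MORE THAN `limit` open connections,
# regardless of what those connections are being used for.

from collections import defaultdict

class MaxConn:
    def __init__(self, limit=4):
        self.limit = limit
        self.active = defaultdict(int)   # client IP -> open connection count

    def matches(self, ip):
        """True when the IP exceeds the connection limit (ACL matches)."""
        return self.active[ip] > self.limit

    def open(self, ip):
        self.active[ip] += 1

    def close(self, ip):
        self.active[ip] -= 1

acl = MaxConn(limit=4)
for _ in range(5):
    acl.open("192.0.2.1")
print(acl.matches("192.0.2.1"))   # True: 5 connections exceed the limit of 4
```

Note the counter is per source IP, not per URL or per file — which is exactly why it cannot limit parallel range requests for a single object on its own.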


RE: [squid-users] Strange problem with sibling squids in accelerator mode

2009-06-30 Thread Lu, Roy
I am using version 3.0 stable 16.




Re: [squid-users] acl maxconn per file or url

2009-06-30 Thread Chudy Fernandez

Theoretically it's supposed to work: it does limit to 4 connections, but
the download doesn't finish.
I wonder how the sites replied to the 4 allowed connections when I'm
requesting 16 — HTTP code 416? Or 408?





[squid-users] X-Cache regex need some help

2009-06-30 Thread Chudy Fernandez

header:
X-Cache HIT from Server

the following doesn't work
acl hit rep_header X-Cache HIT\ from\ Server
or
acl hit rep_header X-Cache HIT.from.Server
or even
acl hit rep_header X-Cache HIT.*Server
it only matches for
acl hit rep_header X-cache HIT

I'm using this for 
log_access deny hit

I'm wondering: is "from Server" some kind of code?
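For what it's worth, all three patterns look fine as regexes against the header value "HIT from Server" — here is a quick check in Python's regex engine (which behaves the same as POSIX extended regex for these simple patterns). If they all match here but not in Squid, one thing worth checking is how the squid.conf line is tokenized around the escaped spaces, rather than the regexes themselves:

```python
# Sanity-check that the attempted rep_header patterns all match the
# X-Cache header value as plain regexes. Illustrative only; Squid
# compiles POSIX extended regexes, which agree with Python's engine
# on these simple patterns.
import re

value = "HIT from Server"
patterns = [r"HIT from Server", r"HIT.from.Server", r"HIT.*Server", r"HIT"]
print([bool(re.search(p, value)) for p in patterns])   # all four should match
```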



  


[squid-users] squid-3.1.0.9 - error directory not created automatically

2009-06-30 Thread Zeller, Jan
dear list,

I compiled squid-3.1.0.9 like this :

$ squid -v
Squid Cache: Version 3.1.0.9
configure options:  '--prefix=/opt/squid-3.1.0.9' '--enable-icap-client' 
'--enable-ssl' '--enable-linux-netfilter' '--enable-http-violations' 
'--with-filedescriptors=32768' '--with-pthreads' '--disable-ipv6' 
--with-squid=/usr/local/src/squid-3.1.0.9 --enable-ltdl-convenience

Unfortunately, no 'error' directory is created!? Why? squid-3.1.0.7
created this directory automatically.
Should I explicitly download the language pack from 
http://www.squid-cache.org/Versions/langpack/ ?

kind regards,

Mit freundlichen Grüssen
---
Jan Zeller
Informatikdienste 
Universität Bern