[squid-users] Re: kid2| WARNING: disk-cache maximum object size is unlimited but mem-cache maximum object size is 32.00 KB

2013-10-27 Thread Linda Walsh

Noticed a few items I would try to simplify (these are suggestions
from my own experience...)...

1) For storeio, don't use "rock" -- it limits items on disk to 32k, and on
a 486 you aren't going to want what it was designed for (multi-core systems).



2) Get rid of things you don't need -- i.e., are you really using delay pools?
cache-digests? icap? esi? Are you only trying to run it as
a "transparent proxy" (enable-linux-netfilter)?

If you have nothing working... try tossing the auth and acl helpers on your
first test version...  Once you get something working, add your needed
features back in.

Given that it's your first & only squid proxy, I doubt you need cache
digests in your test version, for example (I believe they are used for
sharing content between squid servers...)...  simplify until you get
something working... (see the stripped-down sketch below)
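
As a concrete starting point, a stripped-down, single-process test config
might look something like this (a sketch only -- sizes and paths are
placeholders; setting both object-size limits explicitly should also quiet
the warning in the subject line, which comes from the disk and memory
limits disagreeing):

   # minimal test squid.conf -- no rock, no SMP workers
   http_port 3128
   cache_dir aufs /var/cache/squid 1024 16 256
   maximum_object_size 256 MB
   maximum_object_size_in_memory 512 KB
   cache_mem 256 MB
   http_access allow localhost
   http_access deny all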




On 10/25/2013 1:58 PM, Ahmad wrote:

hi, i compiled squid 3.3.8 with the options below:

Squid Cache: Version 3.3.8
configure options:  '--build=i486-linux-gnu' '--prefix=/usr'
'--includedir=/include' '--mandir=/share/man' '--infodir=/share/info'
'--sysconfdir=/etc' '--enable-cachemgr-hostname=drx' '--localstatedir=/var'
'--libexecdir=/lib/squid' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules' '--srcdir=.'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--disable-translation'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-filedescriptors=131072' '--with-large-files'
'--with-default-user=squid' '--enable-linux-netfilter'
'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2' 'LDFLAGS='
'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g 
-Wall -O2' --enable-ltdl-convenience



i followed the example at:
http://wiki.squid-cache.org/Features/SmpScale


the problem is that a warning occurs and then squid goes down,
here it is !!
*kid2| WARNING: disk-cache maximum object size is unlimited but mem-cache
maximum object size is 32.00 KB*

logs say that squid is down, but i can access squid by port, but as i
think there is no caching occurring on the hard disks !!! tr has fixed size and all
logs are tcp miss, not tcp hit !!!

i googled a lot but no benefit

here is log file when i start squid :
[root@DataBase ~]# tailf /var/log/squid/backend.cache.log 
2013/10/25 22:47:39 kid2| Logfile: closing log

stdio:/var/log/squid/backend.access.log
2013/10/25 22:47:39 kid4| Open FD UNSTARTED 8 DNS Socket IPv6
2013/10/25 22:47:39 kid4| Open FD UNSTARTED 9 DNS Socket IPv4
2013/10/25 22:47:39 kid4| Open FD READING  13 
2013/10/25 22:47:39 kid2| Open FD UNSTARTED 8 DNS Socket IPv6

2013/10/25 22:47:39 kid2| Open FD UNSTARTED 9 DNS Socket IPv4
2013/10/25 22:47:39 kid2| Open FD READING  13 
2013/10/25 22:47:39 kid3| Squid Cache (Version 3.3.8): Exiting normally.

2013/10/25 22:47:39 kid2| Squid Cache (Version 3.3.8): Exiting normally.
2013/10/25 22:47:39 kid4| Squid Cache (Version 3.3.8): Exiting normally.



2013/10/25 22:50:24 kid4| Preparing for shutdown after 0 requests
2013/10/25 22:50:24 kid4| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid4| Shutdown: NTLM authentication.
2013/10/25 22:50:24 kid4| Shutdown: Negotiate authentication.
2013/10/25 22:50:24 kid4| Shutdown: Digest authentication.
2013/10/25 22:50:24 kid4| Shutdown: Basic authentication.
2013/10/25 22:50:24 kid2| Preparing for shutdown after 0 requests
2013/10/25 22:50:24 kid3| Preparing for shutdown after 0 requests
2013/10/25 22:50:24 kid2| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid3| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid2| Closing HTTP port 127.0.0.1:4002
2013/10/25 22:50:24 kid3| Closing HTTP port 127.0.0.1:4003
2013/10/25 22:50:24 kid2| Shutdown: NTLM authentication.
2013/10/25 22:50:24 kid2| Shutdown: Negotiate authentication.
2013/10/25 22:50:24 kid3| Shutdown: NTLM authentication.
2013/10/25 22:50:24 kid3| Shutdown: Negotiate authentication.
2013/10/25 22:50:24 kid2|

[squid-users] Re: Optimized Configuration?

2013-06-27 Thread Linda Walsh

Eliezer Croitoru wrote:

The basics...
Good hardware..
If you have good hardware there is nothing much you need to tune.
How many users?

anyone had the chance to do a MIPS vs INTEL GB lan card?


More pertinent today would be a 10Gb comparison.

My systems regularly pegged a 1Gb Intel card in sys-sys file copies
(125MB/s writes, 119MB/s reads)...

Best speeds I've gotten off of 2x10Gb has been in the 700MB/s range,
though typical is more likely in the 200-400MB/s range.  HW/SW -- through
file server to null files tops out due to HW on both ends
(both Xeon Nehalems, one 12core@2.6, other 6core@3.4;
Intel cards between them, direct connect).


[squid-users] what is lib.a in src/repl?

2012-05-29 Thread Linda Walsh
I just "upgraded(?)" some libraries / rpm from my 11.4->12.1 open suse 
release and

find I am no longer able to build squid --

I'm getting a weird error that indicates "something" is missing, but it is
rather generic...

The other libs in the dir are "libheap" and "lru"...

Looks like my _repl_acement policies... I was using mem: heap GDSF
and cache replacement heap LFUDA

Don't see how LFUDA or GDSF tie to the heap policy... or why either would
map to a generic name like lib.a?

I did re-run 'configure'...hmmm...maybe I need to rebootstrap?
(a typical sequence is sketched below)

weird...(3.2.0.17)...
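
(For reference, a typical re-bootstrap sequence -- assuming the usual
autotools (autoconf/automake/libtool) are installed -- would be roughly:

   cd /home/tools/squid/squid-3.2.0.17
   make distclean        # clear out stale generated files
   ./bootstrap.sh        # regenerate configure + the Makefile.in files
   ./configure <your usual options>
   make
)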



libtool: link: /usr/bin/ar cru .libs/libfs.a .libs/Module.o
libtool: link: ranlib .libs/libfs.a
libtool: link: ( cd ".libs" && rm -f "libfs.la" && ln -s "../libfs.la" 
"libfs.la" )

make[3]: Leaving directory `/home/tools/squid/squid-3.2.0.17/src/fs'
Making all in repl                                                 <-

make[3]: Entering directory `/home/tools/squid/squid-3.2.0.17/src/repl'
make[3]: *** No rule to make target `lib.a', needed by `all-am'.  Stop.   <-

make[3]: Leaving directory `/home/tools/squid/squid-3.2.0.17/src/repl'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/tools/squid/squid-3.2.0.17/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/tools/squid/squid-3.2.0.17/src'
make: *** [all-recursive] Error 1






[squid-users] Re: Corrections (was TCP_SWAPFAIL/200)

2012-04-19 Thread Linda Walsh

Amos Jeffries wrote:


On 18.04.2012 12:46, Linda Walsh wrote:




It appears the local disk-store isn't growing over time -- so I'm
assuming it is telling me the on-disk store isn't working right?


Yes.





Please prioritise the core dump investigation.
Please use gdb and find out what the crash is coming from. The crash and 
core-dump could be what is behind those incomplete or truncated responses.



At this point I suggest updating to 3.2.0.17. There are a bunch of cache 
related fixes in that release. The new cache swap.state format will 
rebuild your cache_dir meta data from scratch and discard anything which 
has problems visible.



---
I prioritized upgrading to the latest release and will go from there
(no need wasting time on things that may have been fixed).




If the core dumps continue with the new release, please prioritise 
those. Most of the rest of what you describe may be side effects of the 
crashing.



---
Will do...



http_access allow CONNECT Safe_Ports


NOTE: Dangerous. Safe_Ports includes ports 1024-65535 and other ports
unsafe to permit CONNECT to. This could trivially be used as a
multi-stage spam proxy or worse.
  ie a trivial DoS of "CONNECT localhost:8080 HTTP/1.1\n\n" results in a
CONNECT loop until your machine's ports are all used up.



Good point. Just wanted to allow the general case of SSL/non-SSL over any of the
ports.  Just trying to get things working at this point... though I have had this
config for some time and no probs -- only connector is on my side and 'me', so
I shouldn't deny myself my own service unless I try!  ;-)
(a safer ordering is sketched below)
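
For reference, the straightforward fix -- sketched from the config further
below -- is to drop the broad allow line and let the CONNECT restriction
plus the local-network allows do the work:

   http_access deny !Safe_ports
   http_access deny CONNECT !SSL_ports   # instead of "allow CONNECT Safe_Ports"
   http_access allow localnet
   http_access allow localhost
   http_access deny all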





hierarchy_stoplist cgi-bin ?


You can drop hierarchy_stoplist from your config for simplicity.


---
check (some of these are carry-overs from prev configs or designed should
I use it for a different config).



cache_mem   8 GB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /var/cache/squid 65535 64 64


You have multiple workers configured. AUFS does not support SMP at this 
time. That could be the problem you have with SWAPFAIL, as the workers 
collide altering the cache contents.


---
Wah?   .. but but...how do I make use of SMP with AUFS?

If I go with unique cache dirs, that's very sub-optimal -- since I end up
with 12 separate cache areas, no?  When I want to fetch something from
the cache, is there coordination about what content is in which worker's cache
that will automatically invoke the correct worker?  -- If so, that's cool,
but if not, then I'll reduce my hit rate by 1/N-cpus.





To use this cache either wrap it in "if ${process_number} = N" tests for 
the workers you want to do caching. Or add ${process_number} to the path 
for each worker to get its own unique directory area.


eg:
 cache_dir aufs /var/cache/squid_${process_number} 65535 64 64

or
if ${process_number} = 1
 cache_dir aufs /var/cache/squid 65535 64 64
endif
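
(With squid 3.2+, the worker count comes from the "workers" directive, so
the first form would look like, e.g. -- a sketch, sizes as in the original:

   workers 4
   cache_dir aufs /var/cache/squid_${process_number} 65535 64 64

Each worker then maintains, and hits, only its own store -- which is
exactly the hit-rate split worried about above.)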





--- As said above, how do I get the multi-core benefit together with
asynchronous writes?







url_rewrite_host_header off
url_rewrite_access deny all
url_rewrite_bypass on


You do not have any re-writer or redirector configured. These 
url_rewrite_* can all go.


-
Is it harmful? (it was for future 'expansion plans' -- no
rewriters yet, but I was planning...)





refresh_pattern -i (/cgi-bin/|\?) 0 0%  0


This above pattern ...




 above what pattern?




refresh_pattern -i \.(ico|gif|jpg|png)   0 20%   4320
ignore-no-cache ignore-private override-expire
refresh_pattern -i ^http:   0 20%   4320ignore-no-cache 
ignore-private


"private" means the contents MUST NOT be served to multiple clients. 
Since you say this is a personal proxy just for you, thats okay but be 
carefulif you ever open it for use by other people. Things like your 
personal details embeded in same pages are cached by this.



Got it... I should add a comment in that area to that effect.

That might be an enhancement -- like:
ignore-private-same-client




"no-cache" *actually* just means check for updates before using the 
cached version. This is usually not as useful as many tutorials make it 
out to be.


---
Well, dang tutorials -- I'm screwed if I follow, and if I don't! ;-)







refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher: 1440    0%     1440


 ... is meant to be here (second to last).


refresh_pattern .   0   20% 4320
read_ahead_gap 256 MB


Uhm... 256 MB of buffering per request -- sure you want to do that?



I **think** so... doesn't that mean it will buffer up to 256MB
of a request before my client is ready for it?

I think of the common case where I am saving a file and it takes me
a while to find the dir to save to.  I tweaked a few params in this area,
and it went from having to wai

[squid-users] Re: Is Adobe Connect Pro possible through Squid?

2012-04-17 Thread Linda Walsh

Linda Walsh wrote:


Peter Olsson wrote:


Hello!

Squid 3.1.19.

Users behind squid can't connect to Adobe Connect Pro,
probably because of RTMP. I have port 1935 in SSL_ports
and 1025-65535 in Safe_ports.

Is there anyone who had success connecting with
Adobe Connect Pro through squid?

Thanks!






I can connect to their test site... through my squid...
http://na1cps.adobeconnect.com/common/help/en/support/meeting_test.htm

I got "connection is LAN speed" through squid, (over a 16/2.5 Cable modem)...



[squid-users] Re: Is Adobe Connect Pro possible through Squid?

2012-04-17 Thread Linda Walsh

Peter Olsson wrote:


Hello!

Squid 3.1.19.

Users behind squid can't connect to Adobe Connect Pro,
probably because of RTMP. I have port 1935 in SSL_ports
and 1025-65535 in Safe_ports.

Is there anyone who had success connecting with
Adobe Connect Pro through squid?

Thanks!



From their requirements page, it doesn't seem like it would
preclude a proxy...

But you might have to use a transparent mode proxy for it to work...
i.e. port forwarding...

Is there somewhere in the client where you can specify a proxy port?

If so, I'd bet RTMP would work through a CONNECT session, but if there
is no place to plug in a proxy -- then it's probably trying to go direct.

Have you run a wireshark trace to see if it is trying to get through your
proxy or trying to go direct?

wireshark is very useful for getting to the bottom of these things, and you don't
have to be a network expert to use it... you'll see whether it is talking to your
proxy port or an HTTP request is trying to go direct to the net (not through
a proxy)...



Says, port requirements:

1935 (RTMP), 80 or other HTTP port, 443 if SSL is enabled, 25 for SMTP 
(optional), 1433 for external database (optional)


I've found SNAT or masquerading to be a requirement with some apps...then you
have to config squid as a transparent proxy (which I've never done, so don't
know how hard that is)...I'm sure someone who has done it would say it is not
hard, but once you know how to do something, it's rarely 'hard'...;-)
(a minimal sketch below)
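
From what I've read, the HTTP side of interception is roughly two pieces
-- a sketch only, with the LAN interface and ports made up; note this only
catches port-80 HTTP, so RTMP on 1935 would still need plain NAT/forwarding:

   # on the gateway: divert LAN web traffic to squid
   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

and in squid.conf (3.1+ syntax):

   http_port 3128 intercept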




[squid-users] TCP_SWAPFAIL/200

2012-04-17 Thread Linda Walsh
I recently (well, a month or so ago) tried to upgrade squid after my old
version got overwritten by an OS-related upgrade.

Now I am seeing TCP_SWAPFAIL/200 messages in my log -- that doesn't
sound good.


Why would I be getting such?

It appears the local disk-store isn't growing over time -- so I'm
assuming it is telling me the on-disk store isn't working right?

I used a similar config to my previous one (below), so I'm not sure why
it would be croaking now...  Is there something "illegal" about my config?
I also included my non-comment squid.conf lines following that, just to be
thorough.

I'd really like to get squid back to being 100% solid-bullet-proof...
which it isn't right now (have had truncated downloads on longer downloads)...

I'm also getting occasional core dumps in the base of the cache dir,
which is usually a bad sign... ;-|  Haven't had a chance to try to check
the stack trace yet, but was wondering if anything looked amiss with my
swap setup.


It's a 12-core 48G machine, with a reasonably fast RAID, so it should
have plenty of horsepower for 1 user...but I find it can't keep up with
my browsing habits... which is insane considering it's usually used
for 10's-100's of users w/no prob...I know I am not that fast..


Any pointing out of "gotcha's" would be appreciated!...



squid -v
Squid Cache: Version 3.2.0.16
configure options:  'CFLAGS=-g -m64 -O2 -march=native -pipe -D_REENTRENT
'CCFLAGS=-g -m64 -O2 -march=native -pipe -D_REENTRENT 'LDFLAGS= -s'
'--prefix=/usr' '--bindir=/usr/sbin' '--datadir=/usr/share/squid'
'--libexecdir=/usr/sbin' '--libdir=/usr/lib64'
'--localstatedir=/var/cache/squid' '--sharedstatedir=/var/lib/squid'
'--sysconfdir=/etc/squid' '--docdir=/usr/share/packages/doc/squid'
'--with-aufs-threads=24' '--with-logdir=/var/log/squid'
'--with-mandir=/usr/share/man' '--with-piddir=/var/run/squid/squid.pid'
'--with-default-user=squid' '--with-gnu-ld' '--with-included-ltdl'
'--with-pic' '--with-large '--with-ltdl-lib=/usr/lib64'
'--enable-build-info' '--enable-cachemgr-hostname' '--enable-disk-io'
'--disable-ecap' '--disable-icap-client' '--enable-kill-parent-hack'
'--enable-linux-netfilter' '--enable-ltld-install' '--enable-referer-log'
'--enable-removal-policies' '--enable-stacktraces' '--enable-storeio'
'--enable-useragent-log' '--enable-zph-qos' '--enable-x-accelerator-vary'
'--disable-xmalloc-statistics' '--disable-auto-locale' '--disable-htcp'
'--disable-ident-lookups' '--disable-ipv6' '--disable-snmp'
'--disable-translation' '--without-netfilter-conntrack'
'EXT_LIBECAP_CFLAGS=-lecap' 'EXT_LIBECAP_LIBS=/usr/lib/libecap.so.2'

+ a bunch of compiler optimization switches: (that I also mostly used,
 though gcc is a newer version and a few options might be different)

-fpie -fmessage-length=0 -funwind-tables -fasynchronous-unwind-tables
-fbranch-target-load-optimize -fira-loop-pressure -fgcse -fgcse-las
-fgcse-lm -fgcse-sm -floop-interchange -floop-strip-mine -floop-block 
-flto -fpredictive-commoning -frename-registers -ftree-loop-linear 
-ftracer -ftree-loop-distribution -ftree-loop-im -ftree-loop-ivcanon 
-fivopts -ftree-vectorize -funswitch-loops 
-fvariable-expansion-in-unroller -freorder-blocks-and-partition -fweb'




Non-comment squid conf lines :

acl sc_subnet src 192.168.3.0/24
acl localnet src 192.168.3.0/24 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 81  # http
acl Safe_ports port 82  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1024-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Allowed_Connect port 1024-65535 #allowed non-SSL Connects to 
non-reserved ports

acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow manager sc_subnet
http_access deny manager
http_access deny !Safe_ports
http_access allow CONNECT Safe_Ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 192.168.3.1:8080
hierarchy_stoplist cgi-bin ?
cache_mem   8 GB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /var/cache/squid 65535 64 64
maximum_object_size 1 GB
cache_swap_low 93
cache_store_log /var/log/squid/store.log
pid_filename /var/run/squid/squid.pid
strip_query_terms off
buffered_logs on
cache_log daemon:/var/log/squid/cache.log
coredump_dir /var/cache/squid
url_rewrite_host_header off
url_rewrite_access deny all
url_rewrite_bypass on
refresh_pattern

[squid-users] FWD: squid 3.2.0.16 access log no longer strictly increase...? *ouch* -- bug or feature?

2012-03-27 Thread Linda Walsh
repost -- mailer bounced it -- for some reason it thinks that having multiple
formats of a message to choose from is a security problem?  Gee...

good thing squid doesn't reject things with type-info, or it wouldn't work at
all.

Guess things that reject all types are becoming the dinosaurs of the net.


To: squid-users@squid-cache.org
Subject: squid 3.2.0.16 access log no longer strictly  increase...?  *ouch* -- 
bug or feature?


Got a surprise in a new version of squid, 3.2.0.16 -- in monitoring my
log, my monitor prog burped and died.  It didn't like trying to
calculate the average rate over a negative time period (for some reason it
doesn't realize that time can run backwards and data is actually sucked
back out...;-))

Is this normal now?  One of the things I changed recently was going from
syslog, which tends to be pretty good about not having times go backwards,
to 'diskd'.

Dunno if it is a bug or a feature, but it is a bit odd looking in a
time-progression based log.  I can at least prevent my script from gagging
on such, but is it supposed to be doing that??

Thanks -- sample times included below from log  -- ALL of these were
while I was connected to 'google', so ALL of them were 'CONNECT' log
messages, which might enter into the equation somewhere...
-linda
(** indicates time going backwards)

1332885640.662
1332885641.442
1332885641.431**
1332885641.436
1332885644.023
1332885663.344**
1332885663.515
...
1332885668.461
1332885670.637
1332885671.136
1332885672.015
1332885671.807**
1332885672.527
1332885672.325**
-
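
(For anyone wanting to check their own logs -- assuming the stock format
with the UNIX timestamp in field 1 -- something like:

   awk '$1 < prev { print NR ": " prev " -> " $1 } { prev = $1 }' access.log

prints each line where the clock appears to run backwards.)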

p.s. -- OTOH, it could be a marketing feature... squid's so fast, it
fetches content in negative time!





[squid-users] Requested stack backtrace....from recent factor coredumping squid...






 Original Message 
Received: 	(from squid@localhost) by Ishtar.tlinx.org 
(8.14.4/8.14.3/Submit) id p6LIX7KN009953 for squid_maintai...@tlinx.org; 
Thu, 21 Jul 2011 11:33:07 -0700

From:   WWW proxy squid 
Date:   Thu, 21 Jul 2011 11:33:07 -0700
To: squid_maintai...@tlinx.org



From: squid@"web-proxy"
To: squid_maintai...@tlinx.org
Subject: The Squid Cache (version 3.HEAD-BZR) died.

You've encountered a fatal error in the Squid Cache version 3.HEAD-BZR.
If a core file was created (possibly in the swap directory),
please execute 'gdb squid core' or 'dbx squid core', then type 'where',
and report the trace back to squid-b...@squid-cache.org.

Thanks!

Well fine!
---
squid -v reports:
Squid Cache: Version 3.HEAD-BZR
configure options:  '--enable-disk-io' '--enable-async-io=48' \
'--enable-storeio' '--enable-removal-policies' '--disable-htcp' \
'--enable-ssl' '--disable-ident-lookups' '--enable-external-acl-helpers'\
'--with-dl' '--with-large-files' '--prefix=/usr' '--sysconfdir=/etc/squid'\
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/sbin' \
'--datadir=/usr/share/squid' '--libdir=/usr/lib64' '--localstatedir=/var' \
'--with-default-user=squid' '--enable-icap-client' '--enable-referer-log' \
'--disable-wccp' '--disable-wccpv2' '--disable-snmp' \
'--enable-cachemgr-hostname' '--disable-eui' '--enable-delay-pools' \
'--enable-useragent-log' '--enable-zph-qos' '--enable-linux-netfilter' \
'--disable-translation' '--with-aufs-threads=32' \
'--disable-strict-error-checking' 'CFLAGS=-fgcse-after-reload \
-fpredictive-commoning -frename-registers -ftracer \
-fbranch-target-load-optimize -fbranch-target-load-optimize2 \
-march=native'
---
GDB output...


Ishtar:home/tools/squid/trunk-repo/work# gdb /usr/sbin/squid core
GNU gdb (GDB) SUSE (7.2-3.3)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
For bug reporting instructions, please see:
...
Reading symbols from /usr/sbin/squid...done.
[New Thread 19860]
[New Thread 19866]
[New Thread 19864]
[New Thread 19868]
[New Thread 19865]
[New Thread 19887]
[New Thread 19888]
[New Thread 19867]
[New Thread 19889]
[New Thread 19890]
[New Thread 19891]
[New Thread 19892]
[New Thread 19893]
[New Thread 19894]
[New Thread 19895]
[New Thread 19869]
[New Thread 19870]
[New Thread 19871]
[New Thread 19872]
[New Thread 19873]
[New Thread 19874]
[New Thread 19875]
[New Thread 19876]
[New Thread 19877]
[New Thread 19878]
[New Thread 19879]
[New Thread 19880]
[New Thread 19881]
[New Thread 19882]
[New Thread 19883]
[New Thread 19884]
[New Thread 19885]
[New Thread 19886]
Missing separate debuginfo for 
Try: zypper install -C "debuginfo(build-id)=25a556a294ad301b9f7e958de79c9df1529ff895"

Reading symbols from /lib64/librt-2.11.3.so...Reading symbols from 
/usr/lib/debug/lib64/librt-2.11.3.so.debug...done.
done.
Loaded symbols for /lib64/librt-2.11.3.so
Reading symbols from /lib64/libpthread-2.11.3.so...Reading symbols from 
/usr/lib/debug/lib64/libpthread-2.11.3.so.debug...done.
done.
Loaded symbols for /lib64/libpthread-2.11.3.so
Reading symbols from /lib64/libcrypt-2.11.3.so...Reading symbols from 
/usr/lib/debug/lib64/libcrypt-2.11.3.so.debug...done.
done.
Loaded symbols for /lib64/libcrypt-2.11.3.so
Reading symbols from /lib64/libssl.so.1.0.0...Reading symbols from 
/usr/lib/debug/lib64/libssl.so.1.0.0.debug...done.
done.
Loaded symbols for /lib64/libssl.so.1.0.0
Reading symbols from /lib64/libcrypto.so.1.0.0...Reading symbols from 
/usr/lib/debug/lib64/libcrypto.so.1.0.0.debug...done.
done.
Loaded symbols for /lib64/libcrypto.so.1.0.0
Reading symbols from /usr/lib64/libgssapi_krb5.so.2.2...Reading symbols from 
/usr/lib/debug/usr/lib64/libgssapi_krb5.so.2.2.debug...done.
done.
Loaded symbols for /usr/lib64/libgssapi_krb5.so.2.2
Reading symbols from /usr/lib64/libkrb5.so.3.3...Reading symbols from 
/usr/lib/debug/usr/lib64/libkrb5.so.3.3.debug...done.
done.
Loaded symbols for /usr/lib64/libkrb5.so.3.3
Reading symbols from /usr/lib64/libk5crypto.so.3.1...Reading symbols from 
/usr/lib/debug/usr/lib64/libk5crypto.so.3.1.debug...done.
done.
Loaded symbols for /usr/lib64/libk5crypto.so.3.1
Reading symbols from /lib64/libcom_err.so.2.1...Reading symbols from 
/usr/lib/debug/lib64/libcom_err.so.2.1.debug...done.
done.
Loaded symbols for /lib64/libcom_err.so.2.1
Reading symbols from /lib64/libkeyutils-1.3.so...Reading symbols from 
/usr/lib/debug/lib64/libkeyutils-1.3.so.debug...done.
done.
Loaded symbols for /lib64/libkeyutils-1.3.so
Reading symbols from /lib64/libresolv-2.11.3.so...Reading symbols from 
/usr/lib/debug/lib64/libresolv-2.11.3.so.debug...done.
done.
Loaded symbols for /lib64/libresolv-2.11.3

[squid-users] Re: log message oddities -- what do they mean? how to interpret?


Amos Jeffries wrote:





As documented this bundle had a lot of deep I/O and communication 
architectural changes. Instability is/was expected.


Most of the bugs you hit are now resolved in the daily update bundle.
If you need a relatively stable 3.2 release please use 3.2.0.8.


---
I thought 3.2.0.9 was latest stable in that series...my bad.

>>


Jul 13 22:28:49 Ishtar squid[10383]: /var/cache/squid/03/29/3A79
squid[10383]: DiskThreadsDiskFile::openDone: (2) No such file or 
directory

squid[10383]: /var/cache/squid/03/29/3A7B
squid[10383]: DiskThreadsDiskFile::openDone: (2) No such file or 
directory


But looking in those dirs, I can see those files...so what file doesn't
exist?
They are owned by user/group 'squid.squid', w/perms 640.
-


Either the file did not exist at the time that open was attempted, or 
the process logging that does not actually have squid:squid permission. 
Looks like it may be running as user "Ishtar".


---
That's actually the server name.   This one is still a bit
perplexing, but not going to worry about it.





NOTE, the above message are ***real*** important personally, as I'm
not getting them with my new build and using aufs.


---
^^really messed up that line; left out the NOT. I was trying
to emphasize by all the '**' round 'real'...(talk about missing the
forest for the trees!)...



Linux works fastest on AUFS, BSD systems work fastest on diskd, due to
a design problem in the AIO implementation of Squid which BSD runs up
against.



Great to hear, since aufs has been working great for me..




But, that might be fixable if they are faster...

Anyway -- (just noticed that log message above...as I was looking at
current log messages with 3.2.0.9... am getting many more messages / much
more verbosity.
I guess my default 'build' settings are for a bit more 'noise'? (or I
don't have something config'ed correctly in my squid.conf for the new 3.2).


"debug_options ALL,1" perhapse?
  There were some messages set at the wring verbosity in 3.2.0.9 and 
some bugs which cause loud complaints. I think we have got most of those 
ones out now.


---
Well, they gave me things to look for as to why things weren't
working... ;-)



So What's a !commHasHalfClosedMonitor(fd)...and why does it cause death?


pconn issues. We fixed those the other day, so the
squid-3.2.0.9-20110714 daily should be fixed.



Oddly enough I was getting MANY sig6's in suse's build, and theirs
is 3.1 based.  But it might be related to the directory they have
configured for *squid*'s 'system shared state dir' (/comm in configure),
which is /var/squid -- I found it missing, so maybe that was causing problems
with the stock suse rpm, though it wasn't working as well as the version
I'd built from 3.0-HEAD.


Squid-3.2 IPC is (wrongly) hard-coded to use PREFIX/var/run instead of 
the system local state dir. We are in progress fixing that now.



It doesn't use what's from 'configure'?
Suse's config process uses a val of prefix=/usr, so
it would have been trying to use '/usr/var/run', which would
be a problem... ;-)

>> ---


just created it and will see if that fixes that...but now see:

assertion failed: mem.cc:511: "MemPools[t]"
---
Not sure why I saw this, but I twiddled some memory settings, though
I'm not really using any delay pools...hmmm...


The fix for this is nearly out of a change audit now I hope. You can 
safely work around it by erasing the assert line 511 in mem.cc



I would, and will probably try 3.2 again, as I have a multicore
machine that I run squid on.  But to solve my problem, I gave up
on 3.2 for the nonce, and did a make from 3.HEAD (which a cron
job keeps updated daily).   I was surprised/'didn't get' that 3.HEAD
was only a devel-branch for 3.1 -- as the 3.2 incompat's/probs I had
in converting, on top of using the wrong (too 'new') version, all went
away when I went back to a build from 3.HEAD.

I could have resorted to backups, but wanted to work forward.

Am working through several follow probs from my server distro
upgrade (lots of consequential problems in refixing config stuff...)..
So when I get back to a stable 'network', Maybe I'll try finding the
3.2.HEAD (am presuming that's what it would be called...)

Glad to hear I wasn't just running into not knowing how
to configure something and that you've already fixed real probs.

All an interesting diversion ...

Thanks for the explanations...now that I know about 3.2 and its
features, I'm wanting to get it working...


Linda





[squid-users] log message oddities -- what do they mean? how to interpret?


Most recent info is at the bottom, but am curious about things I ran
into.

Still have 1 unknown error and no estimate on load handling ability,
But think I will send this off now.

Hopefully others will be able to give it a gander and offer insights

Thanks!

Linda...



I recently upgraded to 3.2.0.9

(had been at 3.HEAD some time back, but then upgraded my server,
and squid3.1.11 got installed).

With it, I got messages like:

Jul 13 22:28:49 Ishtar squid[10383]: /var/cache/squid/03/29/3A79
squid[10383]: DiskThreadsDiskFile::openDone: (2) No such file or directory
squid[10383]: /var/cache/squid/03/29/3A7B
squid[10383]: DiskThreadsDiskFile::openDone: (2) No such file or directory

But looking in those dirs, I can see those files...so what file doesn't
exist?

They are owned by user/group 'squid.squid', w/perms 640.
-

NOTE, the above message are ***real*** important personally, as I'm
not getting them with my new build and using aufs.

Seems like their build was using diskthreads.

I don't know if I wanted to use disk threads.  (do I?)

Isn't AIO faster or would disk threads be?  
Of course, aio doesn't seem to give errors like the ones above! ;-)


But, that might be fixable if they are faster...

Anyway -- (just noticed that log message above...as I was looking at
current log messages with 3.2.0.9...   am getting many more messages /
much more verbosity.
I guess my default 'build' settings are for a bit more 'noise'?  (or I
don't have something config'ed correctly in my squid.conf for the new 3.2).

But what do these mean (and why am I seeing them in the log if they are
normal -- they look like PROXY CONNECT requests(?), but that's just a
guess...)

First set of weird messages (not getting these now that I have log
settings set correctly in squid.conf (just had a filename)...am using
daemon now).

   squid[27996]: forward.cc(96) FwdState: Forwarding client request
   local=192.168.3.1:8080 \
remote=192.168.3.140:51873 FD 13 flags=1,\
   
   url=http://technet.microsoft.com/en-us/library/dd349396(v=WS.10).aspx

   squid[27996]: IcmpSquid.cc(156) Recv: recv: (111) Connection refused
   squid[27996]: IcmpSquid.cc(282) Close: Closing Pinger socket on FD 22
   squid[27996]: storeLateRelease: released 0 objects
   squid[27996]: forward.cc(96) FwdState: Forwarding client request
   local=192.168.3.1:8080 \
remote=192.168.3.140:51873 FD 13 flags=1, \
   
   url=http://i3.technet.microsoft.com/Hash/489f173fe4c898064bb69f941d1e4ca3.css



 and many other Forwarding notes in log...

This looks like what I was seeing with 3.1 as well:  lots of fails... 
with SIG 6's


   squid[957]: assertion failed: comm.cc:1904:
   "!commHasHalfClosedMonitor(fd)"
   squid[7072]: Squid Parent: child process 957 exited due to signal 6
   with status 0
   squid[7072]: Squid Parent: child process 6023 started
   squid[6023]: Starting Squid Cache version 3.2.0.9 for
   x86_64-unknown-linux-gnu...
   squid[6023]: Process ID 6023
   squid[6023]: With 16384 file descriptors available
   squid[6023]: Initializing IP Cache...

...and restart...

  So What's a !commHasHalfClosedMonitor(fd)...and why does it cause death?

(Before I sent this, going over and over config and
squid.conf/configure/filesystem addressings, I noticed some things and
tried fixes...:

filesystem: looks like both the suse version (and mine -- I copied theirs
as closely as possible, as I wanted it to use the same file locations,
presuming that they had set them up properly *cough*)...

/var/squid was (is) set for the comm shared state dir, but it didn't
exist on my machine.

---
just created it and will see if that fixes that...but now see:

assertion failed: mem.cc:511: "MemPools[t]"
---
Not sure why I saw this, but I twiddled some memory settings, though
I'm not really using any delay pools...hmmm...

Only error on the last restart was :

commBind: Cannot bind socket FD 18 to [::]: (13) Permission denied

Impact?  What won't work?  how do I shut it off if I don't need it?


Well, it's been up for a while ... but it remains to be seen whether it
handles load or I keep getting
'proxy refusing connections' (NOT GOOD with 1 user!!!)




[squid-users] Re: Your IP Address: INVALID IPV4 ADDRESS Located near: INVALID IPV4 ADDRESS, INVALID IPV4 ADDRESS (INVALID IPV4 ADDRESS


Amos Jeffries wrote:
DNSStuff address detection is broken. It assumes IPv4 addresses in the 
X-Forwarded-For header. This has always been a false assumption.


Squid-3.1 is IPv6 enabled software. So if the client connects to it over 
IPv6 network the address listed will not be an IPv4.


The squid cache.log when logging with debug section 11,9 contains a lot 
of info about the HTTP protocol coming and going through squid. 
Including the headers.
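
For reference, that is a squid.conf line along the lines of:

   debug_options ALL,1 11,9

(section 11 at level 9, everything else at the default level 1).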


Amos


It may be broken in detecting ipv6, but it's broken
in detecting ipv4 as well

It's calling some geolocation script that displays that
text -- I think they just have a buggy detection script...

Might say something about their IP monitoring software...




[squid-users] Re: Accelerating Proxy options?


Amos Jeffries wrote:

On Tue, 19 Apr 2011 13:31:38 -0700, Linda Walsh wrote:

   Picture this: I (on a client sys) pull in a web page.  At same time
I get it, it's handed over to a separate process running on a separate 
core


read -> copy to reader thread buffer -> copy to processing thread buffer
-> copy to result output buffer (maybe) -> copy to writer thread buffer
-> write.


2x slowdown *on top of* the above processing scan lags. This exact case 
of multiple copying is one of two reasons we do not have threading in 
Squid.


I think you are missing an important point (maybe?)
The current fetch would stay as is, i.e. go straight to the client --
only after being sent to the client would a copy be sent off to the
'prefetcher', so at worst, the prefetcher does its scan and finds out
stuff is already fetched by client requests, but in the best case, the
prefetcher would do its parsing before the client asked for the
page-reqs and they would, theoretically, already be 'in process' of
being fetched.



  Anyway, just some wonderings...
What will it take for Sq3 to get to the feature level of Sq2 and 
allow,


What we are missing in a big way is store-URL and location-URL re-writers.

===
I'm so out of it, I don't even know that terminology -- I could 'guess',
like maybe 1) being rewriting the URL of content stored in the cache to
be more easily reusable, and 2) same as 1, but hmmm...rewriting requests
from clients?...  Ah heck...I'm probably writing out my toes again...
(like speaking out some wrong orifice, but for writing vs. speaking! ;-)).

Oh well, nevermind...my floor is covered with drippings from my
overflowing plate.  **slog**





I've had a handful of people stick their hands up to do this over the 
last year or two. Pointed them at the squid-2 patches which need 
adjusting to compile and work in squid-3 code. Never to hear from them 
again. :(

---
Sigh...probably mean well, but when they get in there, they find
they are way over their heads...or at least alot more work than they
thought it'd be...  not that I don't run into that many times in trying
to change something or other on OpnSrc sw...geez...
Hardly anything is just simple anymore...
My home network is held together with bailing wire and duct tape
and even then it's unstable in weird ways.




[squid-users] Re: Accelerating Proxy options?


Amos Jeffries wrote:

On Mon, 18 Apr 2011 18:30:51 -0700, Linda Walsh wrote:

[wondering about squid accelerator features such as...]
1)  Parsing fetched webpages and looking for statically included content
 and starting a "fetch" on those files as soon as it determines
 page-requisites


Squid is designed not to touch the content. Doing so makes things slower.


   Um, you mean:  "Doing so can often make things slower."   :-)

   It depends on the relative CPU speed (specifically, the CPU speed of
the processor where squid is being run) vs. the external line speed.
Certainly, you would agree that if the external line speed is 30Bps, for
example, Squid would have much greater latitude to "diddle" with the
content before a performance impact would be noticed.


   I would agree that doing such processing "in-line" would create
a performance impact, since even right now, with no such processing being
done, I note squid impacting performance by about 10-30% over a direct
connection to *fast* sites.  However,  I would only think about doing
such work outside of the direct i/o chain via separate threads or processes.

   Picture this: I (on a client sys) pull in a web page.  At the same time
I get it, it's handed over to a separate process running on a separate core
that begins processing.  Even if the server and client parse at the same
speed, the server would have an edge in formulating the "pre-fetch"
requests simply because it's on the same physical machine and doesn't
have any client-server latency.  The server might have an additional
edge since it would only be looking through fetched content for
"pre-fetchables" and not concerning itself with rendering issues.


There are ICAP server apps and eCAP modules floating around that people 
have written to plug into Squid and do it. The only public one AFAICT is 
the one doing gzipping, the others are all proprietary or private projects.

---
  Too bad there is no "CSAN" repository akin to perl's CPAN as well
as a seemingly different level of community motivation to adding to such
a repository.





2. Another level would be pre-inclusion of included content for pages
that have already been fetched and are in cache.  [...]


ESI does this. But requires the website to support ESI syntax in the 
page code.

---
  ESI?  Is there a TLA URL for that? ;-)  



  Anyway, just some wonderings...
  
  What will it take for Sq3 to get to the feature level of Sq2 and allow,
for example, caching of dynamic content?  


  Also, what will it take for Sq3 to get full, included HTTP1.1 support?

  It __seems__ like, though it's been out for years, it hasn't made much 
progress on those fronts.  Are they simply not a priority?


 Especially getting to the 1st goal (Sq3>=Sq2), I would think, would 
consolidate community efforts at improvement and module construction
(e.g. caching dynamic content like that from youtube and the associated 
wiki directions for doing so under Sq2, which are inapplicable to Sq3)...

(chomping at bit, for Sq2 to become obviated by Sq3)...





[squid-users] Accelerating Proxy options?


I was wondering if anyone had written a module for squid to change it into
an 'accelerator', of sorts.

What I mean, specifically -- well, there are a couple of levels.

1)  Parsing fetched webpages and looking for statically included content
(especially .css, maybe .js, possibly image files) and starting
a "fetch" on those files as soon as it determines which files are
going to be needed to render the page.  By 'render', I mean something
along the lines of "wget"'s --page-requisites.

Theoretically, squid would have an edge as it sees the information
first, and could start fetching all of the needed content in parallel
ASAP (of course if it isn't needed, or the client fetching that page
stops the render, existing outstanding requests based on that page
could be aborted).


2. Another level would be pre-inclusion of included content for pages
that have already been fetched and are in cache.

I.e., suppose a page is fetched and it's known that it
includes 3-4 different css pages.  If it is a commonly fetched page,
each client does multiple fetches -- some of which
may involve nested css files -- meaning a client will have to
parse and ask for more (adding multiple Round-Trip-Times/RTTs to
the page's render time).


Depending on load/RTT and sizes, there could be a significant speedup
to the client if those files were all concatenated into 1 file,
so instead of getting:

index.html
   sales.css
  corp.css
  dept.css
   standard.css
   support.css
  enduser.css
   ...
   other includes but involving "user?xxyz" (i.e. likely non-static)

they'd get:

index-[hexid].html
   (includes all static css)
   ...
   (but still has includes for non-static includes)

-
   That way, multiple RTT's and extra fetches could be
eliminated.

   Of course, the benefit of this would depend on the amount
of processing time it took to do this type of processing vs. the
"cost" (usually just 'time', but in some cases, 'money' as well).


I was just curious if anyone had thought about modules for squid
that would do this, or if squid would even be suitable for hosting
and/or including such extensions?

Thanks,
Linda





[squid-users] Re: squid cache prob: won't cache a 'pdf'


Eliezer Croitoru wrote:

well i managed to make it be cached using a specific rule,
and your rule should do the trick
but look at the difference between our rules:
refresh_pattern -i ^http://www\.lsi\.com/.*AssetMgr\.aspx\?asset.* 4320
70% 10080

leave the address ^^ alone
 refresh_pattern .   0 20%   4320 ignore-no-store ignore-no-cache
ignore-private ignore-auth override-expire reload-into-ims

^
your minimum time that you are using is 0, so you can try it with 2
minutes; also, in this case you are breaking the http protocols.

---
Yeah, I could go for a minimum, but as you note, that pattern is a 'general
pattern' and I don't want to go breaking things I don't have to.



i must tell you that a proxy with this kind of settings on the "."
pattern can lead to a lot of trouble for the users.
so for problematic sites that do not allow or want to be cached you
don't need to make your whole server a mess of wrong refresh patterns.

---
Could -- but haven't in 10 years using that pattern...



it's my line of thinking and it can also be a bug in the squid server
but i did manage to cache the file using 3.2.0.5.
and i think that the older versions will also do the trick in this
specific case.



Well, that's what I was wondering -- I'll try a more specific
pattern, but the point was that something like those 'pdfs', I thought,
should just be cached!

There's nothing special about them other than I happened to load
them more than once and wondered why it took so long for static content that
I thought, should have been cached.

That's what got me running my 'squidlog' monitoring script that
produces the short output in the basenote.

From there, I started massaging options -- trying to figure out why
it wasn't caching...

	Now, I guess -- it's down to 'must be some bug...'...   


Been a while since I recompiled off the latest bzr, so maybe that's
the next step...




[squid-users] Re: squid cache prob: won't cache a 'pdf'


Eliezer Croitoru wrote:

On 07/04/2011 11:52, Linda Walsh wrote:


Amos Jeffries wrote:

Marked explicitly as "private" - aka cannot be cached by any 
middleware proxy (such as Squid) which may send it to other users. 
May be cached by a personal cache such as the browser storage.

---
But I don't have to log in.

More importantly, wouldn't setting the 'ignore-private' in the refresh 
pattern override that?




after adding :
refresh_pattern -i ^http://www\.lsi\.com/.*AssetMgr\.aspx\?asset.* 4320 
70% 10080 override-expire override-lastmod reload-into-ims ignore-reload 
ignore-no-store ignore-private



Um, quoting from my original note, I have (all on 1 line, no '\'):

 refresh_pattern .   0 20%   4320 ignore-no-store \
  ignore-no-cache ignore-private ignore-auth override-expire \
  reload-into-ims


Is this what you are referring to by:


you can use the squid config directive
http://www.squid-cache.org/Doc/config/refresh_pattern/



Is there something wrong with the refresh pattern I have?

I don't understand what you are trying to get me to correct or
trying to get me to read at the above URL.



[squid-users] Re: squid cache prob: won't cache a 'pdf'


Amos Jeffries wrote:

Marked explicitly as "private" - aka cannot be cached by any middleware 
proxy (such as Squid) which may send it to other users. May be cached by 
a personal cache such as the browser storage.

---
But I don't have to log in.

More importantly, wouldn't setting the 'ignore-private' in the refresh pattern 
override that?




[squid-users] squid cache prob: won't cache a 'pdf'


I was downloading some product documentation from the
documentation section on:
http://www.lsi.com/channel/products/jbods/sata_sas_jbods/630j/index.html

Specifically, I tried:

http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54432
http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54841
http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54435

They all load smallish pdf's:
(from log monitor:)
   +63.50  346ms; ln=473  (1.3K/7.4) TCP_MISS/200 http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54841 - 
HIER_DIRECT/www.lsi.com application/pdf ]
   +7.01   220ms; ln=462  (2.1K/65.9) TCP_MISS/200 http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54435 - 
HIER_DIRECT/www.lsi.com application/pdf ]
   +6.21  23914ms; ln=5051477(206.3K/795.4K) TCP_MISS/200 [GET 
http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54432 - 
HIER_DIRECT/www.lsi.com application/pdf ]




Now I've tried several mods in my squid.conf file (how do you get
squid to display its version?  I tried --version, but
no go) -- am running something like Squid 3.2.0.4 (at least
it's the last entry in the 'Changelog' on disk; it signs on
as "Head-BZR").

Things I have tried:
1) commenting out:
   'acl QUERY urlpath_regex cgi-bin \?'
   'cache deny QUERY'
2) adding back:
   'acl QUERY urlpath_regex cgi-bin \?'
   'cache allow QUERY'## Note changed it to 'allow'
3) commenting out:
   'hierarchy_stoplist cgi-bin ?'
  Note -- didn't think I needed this, as I had no other
caches I was querying from, but a comment further on down
under 'nonhierarchical_direct', said,

  "By default, squid will send any non-hierarchical
   requests (matching hierarchy_stoplist or not cachable
   request type) direct to origin servers.  If you
   set this to off, Squid will prefer to send these request
   to parents."

I took the comment to indicate that if something was in the
hierarchy_stoplist, it would also prevent caching, thus my try
at disabling it.
4) In my refresh patterns, I have entries for ftp and gopher
and one for ".": (which presumably would match everything else):

   refresh_pattern .   0 20%   4320

To that line I have tried adding a bunch of keywords
(note, it's all 1 line in the squid.conf file, no backslashes):

   refresh_pattern .   0 20%   4320 ignore-no-store \
   ignore-no-cache ignore-private ignore-auth override-expire \
   reload-into-ims

The only ones I haven't tried yet are 'refresh-ims',
'override-expire' and 'override-lastmod', but those shouldn't
be needed and might cause more headaches than it is worth.

Is there something I'm missing?  This seems like it should be
'simple'.

*sigh*
Linda



Relevant log file entries are below (access, cache, store...)





The full entry (from access.log) from one of the above shows:

1302116600.765108 192.168.3.140 TCP_MISS/200 468 HEAD 
http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54432 - 
HIER_DIRECT/www.lsi.com application/pdf [Host: 
www.lsi.com\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; 
en-US; rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16\r\nAccept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json\r\nAccept-Language: 
en,en-us;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: 
UTF-8,*\r\nKeep-Alive: 1800\r\nProxy-Connection: keep-alive\r\n] 
[HTTP/1.1 200 OK\r\nDate: Wed, 06 Apr 2011 19:03:16 GMT\r\nServer: 
Microsoft-IIS/6.0\r\nX-Powered-By: ASP.NET\r\nX-AspNet-Version: 
2.0.50727\r\nContent-Disposition: attachment; 
filename=JBOD_Enclosures_Guide_080310.pdf\r\nSet-Cookie: 
ASP.NET_SessionId=vgzglkahj1njarzzn4yooun3; path=/; 
HttpOnly\r\nCache-Control: private\r\nContent-Type: 
application/pdf\r\nContent-Length: 5051083\r\n\r]





Store.log shows that is 'releasing' it instead of storing it:

1302116600.765 RELEASE -1  F40B797155CE4FEC4BC72BD28966D753  200 
1302116596-1-1 application/pdf 5051083/0 HEAD 
http://www.lsi.com/DistributionSystem/User/AssetMgr.aspx?asset=54432





cache.log for last startup:
--
2011/04/06 11:59:41 kid1| Starting Squid Cache version 3.HEAD-BZR for 
x86_64-suse-linux-gnu...

2011/04/06 11:59:41 kid1| Process ID 31410
2011/04/06 11:59:41 kid1| With 4096 file descriptors available
2011/04/06 11:59:41 kid1| Initializing IP Cache...
2011/04/06 11:59:41 kid1| DNS Socket created at [::], FD 8
2011/04/06 11:59:41 kid1| DNS Socket created at 0.0.0.0, FD 9
2011/04/06 11:59:41 kid1| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2011/04/06 11:59:41 kid1| Adding nameserver 192.168.3.2 from 
/etc/resolv.conf

2011/04/06 11:59:41 kid1| Adding ndots 2 from /etc/resolv.conf
2011/04/06 11:59:41 kid1| User-Agent logging is disabled.
2011/04/06 11:59:41 kid1| Referer logging is disabled.
2011/04/06 11:59:41 kid1| Logfile: opening log /var/log/squid/access.log
2011/0

[squid-users] simplest way to block (and drop) 1 'user'(computer) using 1 specific 'URL' ??






I purchased a little toaster-sized HP home-server that I haven't fully made
use of, but that does have an annoying feature.  It's **constantly** sending
messages to an ms-server.  Maybe it's some sort of I'm-alive pulse, but it's
annoyingly filling up my squid log, and always using up/interrupting
normal traffic by __minor__ amounts as it constantly does an HTTP
version of a ping that runs *almost* all the time.

Here's a snipped from a 'cooked' log format I use to give me a quick 
view into what's going w/squid:
   +0.19   182ms; ln=1579 (8.5K/8.4K) TCP_MISS/403 http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - 
HIER_DIRECT/sqm.microsoft.com text/html ]
   +0.18   173ms; ln=1579 (8.9K/8.9K) TCP_MISS/403 http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - 
HIER_DIRECT/sqm.microsoft.com text/html ]
   +0.17   164ms; ln=1579 (9.4K/9.3K) TCP_MISS/403 http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - 
HIER_DIRECT/sqm.microsoft.com text/html ]
   +0.20   191ms; ln=1579 (8.1K/8.0K) TCP_MISS/403 http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - 
HIER_DIRECT/sqm.microsoft.com text/html ]
   +0.15   145ms; ln=1579 (10.6K/10.5K) TCP_MISS/403 http://sqm.microsoft.com/sqm/Windows/sqmserver.dll - 
HIER_DIRECT/sqm.microsoft.com text/html ]

---

It just keeps doing this -- occasionally it will stop for a few minutes,
but most of the time it's doing these little several-K requests.

Is there an easy way in squid to say "if requester='home-server' and
request address = 'http://sqm.microsoft.com/sqm/Windows/sqmserver.dll',
then DROP the request (and issue nothing in the log)"?  (a sketch below)
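
Something along these lines might work -- a sketch, with the server's
address made up; note squid's 'deny' answers with an error page rather
than a true silent drop, and the acl on the access_log line is what keeps
those requests out of the log:

   acl homeserver src 192.168.3.50          # the HP box (hypothetical address)
   acl sqmping dstdomain sqm.microsoft.com
   http_access deny homeserver sqmping      # refuse the request
   access_log /var/log/squid/access.log squid !sqmping  # don't log those hits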


There are cruder methods of shutting it up; like one time, since it is
going through the proxy-server to get to the outside world, I just threw
in an ipchains rule to ignore it altogether.   Fast, but a bit crude.  I
don't want to cut off all internet access -- just that one, constant
droning request that just goes on and on...(filling logs, but most of
all, always reducing my full bandwidth)...


What a pain in the butt!

Talk about products that 'phone home'...This one whines to home about 5
times/second!  LAME!


I currently have no other filtering going on in my squid files, so I'm 
not really sure where to start.  Do I need to write an external helper 
and filter all traffic through it?  That sounds like overkill -- and I'd 
really rather not slow down traffic from other stations -- I already 
get too many 'sorry but your browser is configured to use a proxy which 
is not responding' messages now, as it is -- and ***I'M THE ONLY 
USER!!!***...   (Very sad when 1 user can overwhelm a proxy server 
designed to handle hundreds (if not thousands) of users...  But that's a 
question for another day -- like after I've pulled the latest source and 
tried it to see if it is fixed...;-))



Thanks!

Linda Walsh



[squid-users] squid speedup to client using TCP fast start?




There's an article pointed to by slashdot @ 
http://blog.benstrong.com/2010/11/google-and-microsoft-cheat-on-slow.html
where the author found that instead of a slow start of sending a packet 
or two and waiting for an ACK, some sites like Google and Microsoft 
optimize for their initial web page display by NOT "slow starting" and 
sending 4 or more packets w/o waiting for the initial ACK.  This gives 
them a LARGE boost for that initial page -- and would for any initial 
start of a TCP connection.


Since many connections to squid are small TCP sessions, it seems 
eliminating the 'slow start' might provide a significant boost when 
loading pages with browsers that use many small TCP connections.


Is this something the squid designers have given any consideration to 
for inclusion as an option -- is it something that could be done when 
setting up a connection to a remote server?  I.e. when 'fetching', is it 
possible to set the initial window 'larger' -- since most of the benefit 
comes from using a larger window where the RTT is 'large' (~>30ms)?
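
(On Linux it looks like the initial congestion window can already be raised
per-route with iproute2 -- a sketch, assuming a new-enough kernel, and with
the gateway/interface as placeholders for the real ones:

   # bump the initial congestion window on the default route to 10 segments
   ip route change default via 192.168.1.1 dev eth0 initcwnd 10

...though whether squid itself could or should request that, I don't know.)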


If the RTT times between the squid cache and the client are very low 
already, then the benefit wouldn't be as noticeable.


Some research papers from presentations on the benefit of increasing the 
initial startup window:

Abstract:
  http://www.google.com/research/pubs/pub36640.html
Full Paper:
  http://www.isi.edu/lsam/publications/phttp_tcp_interactions/node2.html

Some simulations from 1998 on the value of increasing the initial TCP 
window size:

  http://www.rfc-archive.org/getrfc.php?rfc=2414

Apparently, a patch may be necessary to give applications control over 
this; the patch was shown here:

   http://www.amailbox.org/mailarchive/linux-netdev/2010/5/26/6278007

-


Also, another method of speeding up web page delivery has to do with
HTTP pipelining -- however, some paper (I forget the link -- had too many 
windows open and then the browser crashed...ARG)...said this wasn't widely 
used due to problems with proxy support.


Doesn't (or does?) squid support this?

Just some general questions...

Thanks!
Linda




[squid-users] proxy 'busy' too often: (errmsg: firefox is configured to use a proxy server that is not responding)





This may be a basic question, but 'more often than I would like',
if I try to browse 'too fast' I will see a message from firefox about
it being configured to use a proxy which is not responding.

All I have to do is reload that page, and it loads -- i.e. it's a
temporary problem, but it's annoying and time-wasting.  It happens
when I've opened more than one link in the background and squid can't
keep up with the freshly opened links of more than one page.

Obviously, this isn't a squid problem, it's a 'user-configuration' problem,
since squid handles a lot larger loads than 1 user opening web-pages
very fast, sequentially!  Is there some number of threads some place that
I should be looking to 'turn up'?
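
(The only knobs I know of to even guess at -- and this is purely my
guessing -- are the aufs/async-io thread counts at configure time, which
I already set below, and the file-descriptor ceiling in squid.conf:

   # a guess at a relevant knob -- raise squid's FD ceiling
   max_filedescriptors 4096

...but maybe the real knob is something else entirely.)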


FWIW, I config & gen my own squid (from some recent branch -- it varies
by whether or not I'm seeing some problem and have pulled source to see
if it is fixed (usually it is) before reporting it).  But this seems a bit
too uncertain to leave entirely to chance.  I'll include my config options
below, in case something there glaringly stands out as lame.

Thanks for any ideas...  It happens *infrequently*.  But it shouldn't
be happening "at all", so that's why I'm wondering if I have
something misconfigured.

Thanks!
Linda

(configure script follows)

export CFLAGS="-fgcse-after-reload -fpredictive-commoning 
-frename-registers -ftracer -fbranch-target-load-optimize 
-fbranch-target-load-optimize2 -march=native"

export CXXFLAGS="$CFLAGS"   # CXXFLAGS (not CCFLAGS) is what configure reads for the C++ compiler

configure --enable-disk-io  --enable-async-io=48  --enable-storeio  
--enable-removal-policies  --disable-htcp  --enable-ssl  
--disable-ident-lookups  --enable-external-acl-helpers  --with-dl  
--with-large-files  --prefix=/usr  --sysconfdir=/etc/squid 
--bindir=/usr/sbin --sbindir=/usr/sbin --libexecdir=/usr/sbin 
--datadir=/usr/share/squid --libdir=/usr/lib64  --localstatedir=/var 
--enable-ecap --with-default-user=squid --enable-icap-client 
--enable-referer-log --disable-wccp --disable-wccpv2 --disable-snmp 
--enable-cachemgr-hostname --disable-eui  --enable-delay-pools 
--enable-useragent-log --enable-zph-qos --enable-linux-netfilter 
--disable-translation --with-aufs-threads=32 --disable-strict-error-checking




[squid-users] Re: Squid3 issues

Gmail wrote:
> I have used much software -- packages, compiled stuff -- for years, and
> never had an experience such as this one; it's a package full of
> headaches, and problem after problem.  And to be honest, the feedback I
> get is always blaming other things.  Why can't you people just admit that
> Squid doesn't work at all, and you are not providing any help
> whatsoever, as if you expect everyone to be an expert?

 I've only seen one post by you on this list -- and that was about
increasing your linux file descriptors at process start time in linux
-- not something in the squid software, but something you do in linux
before you call squid.  It ***SHOULD*** be in your squid's
/etc/init.d/squid startup script -- you should see a command "ulimit -n
<number>".

I have "ulimit -n 4096" in my squid's rc script.

It is a builtin in the "bash" shell.  I don't know where else it is
documented, but if you use the linux-standard shell, "bash", it should
just work.  "-n" sets the maximum number of open file descriptors.
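
I.e., something like this near the top of the rc script's start section
(the paths and the number are a sketch of my setup, not gospel):

   #!/bin/bash
   # raise the per-process open-file limit; squid inherits it as a child
   ulimit -n 4096
   /usr/sbin/squid -f /etc/squid/squid.conf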


> I uninstalled the version that was packaged with Ubuntu hardy, I am
> trying to compile it so I won't have the same problem, with the file
> descriptors, I followed exactly the suggestions in the configure --help
> menu, yet I am getting an error, like Compile cannot create executable,
> or something to that effect.

Maybe you should try a distribution where it is 1) known to work, or
2) already has a pre-compiled binary.

Try opensuse.org. It's what I use.  It works flawlessly out of the
box. (from http://www.opensuse.org/en/).

Everyone will have their favorite and tell you how well it works.
That one is mine (for the nonce).  Been using it for several years -- the
fact that they have gotten seed money from Microsoft also means that
they have worked to add support for the new Vista/Win7 networking stacks,
which support various advanced device functionality (either a pain in the
ass or a bonus, depending on whether or not you have such equipment and
want it to work).  

The fact that it is in there doesn't mean you can't turn it off and
delete it (which I did).  Now I am working to turn it back on as I get some
win-media enabled devices on my network.  (My new TV speaks those protos --
but doesn't work over squid! -- while my new Blu-Ray DVD player (Sony)
used proxy autodetect (http://wpad/wpad.dat) and worked through my squid
proxy on the first try!...I was quite pleased with that.)

So

> After three weeks I managed to get my clients to have access to the
> internet, and many applications didn't work, such as Yahoo, Msn, Steam
> and so on, when I ask for help, nobody has an answer including some
> members of the team.
-
Some of these are problems where you have to contact the application
writers and get them to use HTTP PROXIES -- because they IGNORE your
HTTP_PROXY settings and attempt to go direct.

This is due to no fault of squid, but of the misbehaving applications.
The only way to proxy them would be to use a transparent proxy, which is both
a pain and maybe not worth the bother, as you have to let them connect to
any address at port "whatever" -- not all use port 80. 

Worse -- not all use TCP -- some use UDP, which squid doesn't handle at
all.  In those cases, all you can do is set up NAT on your firewall and let
them talk through it.  Not great for security, but the writers of those
apps don't care about your security -- just their apps.  So you conform to
them or you don't run their apps -- nothing to do with squid.
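
(For the NAT part, a minimal sketch with iptables -- assuming iptables
rather than ipchains these days, and that eth0 is the outside interface:

   # masquerade everything leaving via the external interface
   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

...your firewall setup may want something tighter than that, of course.)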


None of this has anything to do with the squid people -- nearly all your
problems are with the apps you are running -- they write their apps NOT to
work with proxies.  When they do that -- they are not going to work with
squid. 

Only well-behaved apps that work through some proxy (ANY PROXY!) will
work with squid.  Those that are ill-behaved are just poorly behaved
children that refuse to 'get with the program'...  

Whatcha gonna do?


> If anybody can prove me wrong:

Consider yourself "proven wrong"...you are pointing your fingers in the
wrong place.

*peace*, Linda




[squid-users] media center and squid -- telling squid to pass 'direct' to allow http1.1?


Has anyone gotten Windows media center to work through squid?

I just tried it, and whenever it got to 'content', I saw lots of
"bad-gateway" messages right after the HTTP/1.0 returned by squid.  Just
before that, I saw a bunch of SSDP requests looking for HTTP/1.1 -- but the
only thing it got back was an HTTP/1.0.

The remote server kept sending back "bad gateway" -- after about 30
attempts, the client gave up and returned "Video not available at this
time."...

So what I'm wondering is if it is possible to have squid not cache those
attempts... 

From there, I eventually (on the player) got "Video is not available at
this time".

So I was wondering if it was possible to set up some sort of ACL-type list
to tell squid to pass through HTTP/1.1-requiring requests so they wouldn't fail
-- they wouldn't be cached, but better not cached than complete failure.
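
Something like this is the sort of thing I mean (a sketch only -- the
dstdomain is a made-up placeholder for wherever the media-center content
comes from):

   # hypothetical: don't cache media-center traffic; fetch it straight from the origin
   acl mediacenter dstdomain .wmc-content.example.com
   cache deny mediacenter
   always_direct allow mediacenter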

Is this possible or has anyone done this?

Thanks in advance!..

Linda






[squid-users] Re: Squid 3.1.0.13 Speed Test - Upload breaks?


jay60103 wrote:

I'm using  Version 3.1.0.6 and speakeasy.net doesn't work for me either.
Download test okay, but when it starts the upload part it fails with "Upload
test returned an error while trying to read the upload file." 


FWIW, this speed test works for me using 3.HEAD-BZR (the head
version of 3.1.0.15).

I have pipeline_prefetch set to 'on'.
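
(I.e., in squid.conf:

   # let squid read ahead on pipelined client requests
   pipeline_prefetch on

...in case that's what makes the difference here.)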

-l



[squid-users] Re: RFE - HTTP 1.1 RANGES


Amos Jeffries wrote:

Linda W wrote:

If I missed this, please let me know, but I was wondering why
HTTP 1.1 wasn't on the list on the roadmap?  I don't know all
the details, but compression and RANGES are two that could
speed up web usage for the average user.


Not sure which roadmap you are looking at. HTTP/1.1 is on the TODO list 
of Squid-3.

http://wiki.squid-cache.org/RoadMap/Squid3#TODO
http://wiki.squid-cache.org/Features/HTTP11

---
I found it later...it was a little bit buried?  :-)



A lot of the little details have been done quietly. Such as fixing up 
Date: headers and sending the right error status codes, handling large 
values or syntax in certain headers correctly.

---
	Wondered about that.  My experience is that entities wouldn't 
tend to want to fund such work, as standards adherence is often considered
something that should just 'be there'.


I've started working on some experiments towards Expect-100 support 
recently, but its early days on that.

---
That looks pretty messy...


Ranges, it seems to me, could be kept in a binary-sized linked-list of 
chunks corresponding to the content downloaded
'so far'. ... 


Nice ideas. The range support AFAIK has always been stuck up on detail 
of storing ranges.

---
I wondered about that -- it stuck in my head for a long while as well, 
until I thought that the problem was similar to how XFS stores files 
(power-of-two sized 'extents') and how a file can also be 'sparse'... seemed 
like a general idea that might be matchable to the content-caching issue.


A storage engine matching that spec above would be very welcome.

---
Don't hold your breath on my account.  I have limited use of my hands
and wrists, so while they occasionally allow some programming, I can be
somewhat indelicate if I get caught up in a programming task.  I have to
arrange my computer work to not overfocus on any one task, so my body parts
get a chance to rest -- and even then it's easy for my head to get ahead
of my body's limits.  Usually tolerable, except when I get too jazzed about
something; then it's really an annoying drag.  The result is unpredictability
in getting anything done in a time frame, and no, I'm not working. ;^)


For 3.1+ a third-party eCAP module exists for gzip/deflate compression 
in-transit of body content. That can use either eCAP or ICAP to do the 
compression.

---
	That's a good reason for me to be trying a 3.1 build and not just the latest 3.0. 


Linda


[squid-users] Re: problem building squid 3.1 from source...(right list?)


Amos Jeffries wrote:

Linda Walsh wrote:

I'm getting an error that 'AIO' isn't found (I'm specifying
aio on the command line, as I have libaio installed.)


Exactly what ./configure command line?


configure --enable-disk-io="AIO,Blocking,DiskDaemon,DiskThreads" --enable-async-io=8 
--enable-storeio="aufs,coss,diskd,ufs" --enable-removal-policies="heap,lru" --enable-icmp 
--disable-htcp --enable-ssl --enable-linux-netfilter --enable-ipf-transparent --disable-ident-lookups 
--enable-external-acl-helpers="ip_user,ldap_group,mswin_lm_group,session,unix_group,wbinfo_group" --with-dl 
--with-large-files --prefix=/usr --libdir=/usr/lib64 --docdir=/usr/share/doc




ufs is the older slower alternative you should still be able to use 
while the AIO problem is being resolved. When AIO is working you can 
switch between the two without any problem or loss of cache.

---
	I'm not totally w/o a working squid...  The current one is (using SuSE's 
package nomenclature) squid3-3.0.STABLE10-2.12.


I'm trying to build squid-3.1.0.15.


Do you have the development version of the libraries installed? 
"libaio-dev" or something like that.

---
As near as I can tell :-)..

rpm -qa|grep aio

libaio-devel-0.3.104-104.51
libaio-0.3.104-104.51




According to redbot.org that website is breaking the HTTP protocol.

---
Will have to see if I can let them know.  Tried email, but
no answer (I've had emails sent from their site 'bot' never
reach me, so they may have email problems as well).  Hadn't heard of
'redbot.org' before.  Will have to bookmark it.  Its report isn't as
concise or informative as your summary.   :-)


"3.0.10" being 3.0.STABLE10 ?


Yes...I didn't realize STABLE was part of the version string.
Thought it was something SuSE added (since 3.1.0.15 doesn't say
3.1-BETA15...:-) )

So if I see it in the DiskIO dir...

ls

AIO/         DiskFile.h       DiskIOStrategy.h  modules.sh*     WriteRequest.cc
Blocking/    DiskIOModule.cc  DiskThreads/      ReadRequest.cc  WriteRequest.h
DiskDaemon/  DiskIOModule.h   IORequestor.h     ReadRequest.h

...I should be able to specify it in the config?  Am I creating some other 
conflict?

Thanks!
-linda


[squid-users] problem building squid 3.1 from source...(right list?)


I'm getting an error that 'AIO' isn't found (I'm specifying
aio on the command line, as I have libaio installed.)

If I leave enable-diskio blank, I don't know what I am getting, but
it fails on the storeio param next with "aufs" not found.

It seems to detect that I am a linux-unknown-x86-64, and I see source modules 
under the source tree -- so why isn't it finding them?

Is it that aufs isn't available on my platform?  But the same 
doesn't seem true for aio -- or do I have the wrong brand or flavor, 
or...why might it not be detecting my 'aio'?

FWIW, I'm trying to rebuild with latest to see if I still have
a problem with a website that suddenly went 'dark' when accessed
with squid a few days ago.  No pages come up. 
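
(One way I know to poke at it from the squid box itself is squidclient,
which ships with squid -- a sketch, assuming squid is on the default
port 3128:

   # fetch the page through the local squid and dump what comes back
   squidclient -h localhost -p 3128 http://animepaper.net/

...though I haven't gone through that exercise yet.)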

Main problem website is animepaper.net (AP for short).  It's a 
graphics intensive site, so there's much benefit in having 
a large cache like squid.


   If I go around squid (like through a socks proxy),
I can see the website (though going through socks is painful 
compared to squid -- I didn't realize how much squid was 
offloading from the win client!) -- my ff client froze up under what was,
for me, a relatively light load, going through socks.  


   Anyway -- any ideas why I'm having problems building?  If anyone
has a clue about 'AP', that'd be appreciated too.  I'm currently 
running a 3.0.10 on SuSE 11.1.


Thanks,
-linda


Re: [squid-users] squid Make; bug in Makefile?


Henrik Nordstrom wrote:

The perl scripts have recently been replaced by awk in Squid-3, but I
forgot to include the awk scripts in the tarball (they are available in
CVS, however).

Regards
Henrik

---
Funny -- I had gone through a little story in my head about how
the perl scripts were likely newer and the awk scripts were left over -- how
I actually remember using awk back before perl was around and how a switch
to perl was likely done to make the code easier to maintain as more people know
perl than an arcane utility like awk.  :-)

	Now I find it's just the opposite and go through the opposite rationalization 
-- that moving to awk creates a smaller software requirement for someone wanting 
to build squid (presuming they don't need to modify it).  I
don't know that I can do as good a job rationalizing the choice of a switch to 
awk, though.  Even though awk may be a smaller footprint for generation (build),
I'm not sure that benefit would outweigh the decrease in the # of people who would 
know how to modify it.  ??


One "could" make a similar argument for using "C" vs. "C++", as "C" is
a 'lower common denominator" in terms of software tools and developer knowledge,
though such a change would affect alot more than "2" scripts. :-)


	I tried looking, BTW, for a pointer to the CVS sources, but wasn't able to 
readily find one from the main site (and didn't know 
about the "devel" site until after I'd started looking through the sources from 
the tarball)...   Maybe I should try fixing that problem before looking too
deeply at other problem(s) I ran into...

Linda