RE: [squid-users] Re: Delay Pools for Robots

2004-12-22 Thread Kent, Mr. John (Contractor)
Adam,

Thank you for replying.  Here is my second delay pool attempt.
Do you think it will serve the intended purpose,
slowing down robots while allowing humans full speed access?

Does using buckets have any detrimental impact on the
Squid machine's load?  My overall goal is to minimize the robots' impact
on machine load on BOTH the Squid server machine and the back-end webservers
it's accelerating.

Are any special build configuration parameters required to use "browser"?

# Common browsers
acl humans browser Explorer Netscape Mozilla Firefox Navigator Communicator 
Opera Safari Shiira Konqueror Amaya AOL Camino Chimera Mosaic OmniWeb wKiosk 
KidsBrowser Firebird

# Delay Pools
delay_pools 2          # 2 delay pools
delay_class 1 2        # pool 1 is a class 2 pool for humans
delay_class 2 2        # pool 2 is a class 2 pool for robots
delay_access 1 allow humans
delay_access 1 deny all
delay_parameters 1 -1/-1 64000/64000
delay_parameters 2 -1/-1  7000/8000    # Non-humans get this slow bucket
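
One thing I'm unsure of: as written, I don't think anything is ever assigned to
pool 2, since it has no delay_access lines of its own.  If I understand the
semantics correctly, something like the following (untested) is also needed so
that non-human traffic actually lands in the slow pool:

delay_access 2 deny humans
delay_access 2 allow all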

Thank you,

John Kent
Webmaster
NRL Monterey
http://www.nrlmry.navy.mil/sat_products.html


-Original Message-
From: news [mailto:[EMAIL PROTECTED]] On Behalf Of Adam Aube
Sent: Tuesday, December 21, 2004 5:40 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] Re: Delay Pools for Robots


Kent, Mr. John (Contractor) wrote:

> Have an image intensive website (satellite weather photos).
> Using Squid as an accelerator.
> 
> Want to slow down robots and spiders while basically not
> affecting human users who access the web pages.
> 
> Would the following delay_pool parameters be correct for this purpose
> or would other values be better?
> 
> delay_pools 1          # 1 delay pool
> delay_class 1 2        # pool 1 is a class 2 pool
> delay_parameters 1 -1/-1 32000/64000

This makes no distinction between robots and normal visitors. For that you
can use the browser acl (which matches on the User-Agent string the client
sends), then use different delay pools for the common browsers and robots.
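
For example (untested, and the User-Agent substrings below are just a few
well-known crawlers, not a complete list):

acl robots browser Googlebot Slurp msnbot ia_archiver Teoma
delay_pools 1
delay_class 1 2
delay_access 1 allow robots
delay_access 1 deny all
delay_parameters 1 -1/-1 32000/64000

Requests from normal browsers then fall through with no delay pool applied at all.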

Adam



[squid-users] Delay Pools for Robots

2004-12-21 Thread Kent, Mr. John (Contractor)
Greetings,

Have an image intensive website (satellite weather photos).

Using Squid as an accelerator.

Want to slow down robots and spiders while basically not
affecting human users who access the web pages.

Would the following delay_pool parameters be correct for this purpose
or would other values be better?

delay_pools 1          # 1 delay pool
delay_class 1 2        # pool 1 is a class 2 pool
delay_parameters 1 -1/-1 32000/64000

Thank you,
John Kent
Webmaster
NRL Monterey

http://www.nrlmry.navy.mil
http://www.nrlmry.navy.mil/tc_pages/tc_home.html
http://www.nrlmry.navy.mil/nexsat_pages/nexsat.html



[squid-users] Load Balancing with Cache_Peers

2004-07-14 Thread Kent, Mr. John (Contractor)
Greeting Squid Gurus,

I read an interesting article on Load Balancing in Zope with Squid as an accelerator.
http://www.zope.org/Members/htrd/howto/squid

I wanted to try it using Apache servers as a backend instead of Zope.
The problem is the article didn't quite have enough info for me to figure out
how to do it (I did send the author an email), so I was hoping someone on
this list could fill me in.

"Squid can also make http requests to other caches, which Zope can understand. Squid 
contains some sophisticated logic for managing connections to a pool of other caches, 
and these features prove to be useful for managing a pool of backend Zope servers too"

According to the page, I just need to add the following to my squid.conf (I replaced 
their "backendzope" with "backendApacheName"):

"
cache_peer backendApacheName1.dmz.example.com parent 8080 8080 no-digest 
no-netdb-exchange round-robin
cache_peer backendApacheName2.dmz.example.com parent 8080 8080 no-digest 
no-netdb-exchange round-robin

acl in_backendpool dstdomain backendpool
cache_peer_access backendApacheName1.dmz.example.com allow in_backendpool
cache_peer_access backendApacheName1.dmz.example.com deny all
cache_peer_access backendApacheName2.dmz.example.com allow in_backendpool
cache_peer_access backendApacheName2.dmz.example.com deny all

never_direct allow all
The never_direct line will ensure that Squid does not try to resolve the backendpool 
'host' keyword as if it was a real host name, to connect to it if all the peers are 
down. You may need a more sophisticated never_direct acl if you have some backend 
servers which are not presented as peers.
The configuration above assumes that the two backend zopes are providing http and ICP 
on port 8080. To use ICP you will need to enable it with the --icp command line 
switch, and you will need some patches for Zope versions before 2.6. Alternatively 
include the no-query directive in the 
cache_peer lines.
"


The part I don't understand is the redirection:  the page says:
To implement this solution your redirector script must output a URL where the hostname 
part of the URL is a keyword which describes a pool of backend servers, such as 
http://backendpool/VirtualHostBase/http/www.example.com:80/a/b/c Note that the 
hostname part of the URL is not a real host; it is a keyword that will be used in 
squid's configuration. 
I want to try to take advantage of that "sophisticated logic".  VirtualHostBase is 
a Zope-specific keyword.  What should my redirectors return 
to call an Apache backend specified by the backendpool, or can it even be done?
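
For what it's worth, here is the sort of thing I imagine the redirector would have
to emit.  This is a completely untested sketch, and www.example.com is just a
stand-in for the real site name:

#!/usr/bin/perl
# Minimal Squid redirector: read "URL ip/fqdn ident method" lines on stdin,
# write the (possibly rewritten) URL on stdout, one per line.
$| = 1;                  # redirectors must not buffer their output
while (<STDIN>) {
    chomp;
    my ($url) = split;
    # Swap the real hostname for the pool keyword; the cache_peer_access
    # rules plus round-robin then pick one of the backend Apaches.
    $url =~ s{^http://www\.example\.com}{http://backendpool};
    print "$url\n";
}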

Thank you,
John Kent



[squid-users] Straight Apache Server Faster Than Squid Accelerator?!!!

2004-07-13 Thread Kent, Mr. John (Contractor)
Greetings,

Using Squid as an accelerator in front of a seven-machine server farm.
Each server is running a regular "light" Apache server on : and 
a mod_perl-enabled "heavy" server on :.

Just for fun, I thought I'd compare the improvement I got from Squid
over straight Apache when serving a static page, and was dismayed to find that
the straight Apache server was 4x faster than Squid!

Running: Squid Cache: Version 2.5.STABLE5

Tested using Apache Benchmark (/bin/ab)

Here is the straight Apache server:
./ab -n 800 -c 100 http://europa.nrlmry.navy.mil:/tc_pages/tc_home.html
Requests per second:    872.24 [#/sec] (mean)

Here is Squid's output:
./ab -n 800 -c 100 http://www.nrlmry.navy.mil/tc_pages/tc_home.html
Requests per second:    226.61 [#/sec] (mean)

I suspect the reason for this is the complicated Perl redirect script I'm using,
but was hoping someone could suggest something to speed things up.

My redirector first switches between the heavy and light servers, depending on
whether the request is a CGI script call or not, then cycles between the servers in
the pool to achieve load balancing.

I once tried to add FastCGI to the Perl redirectors, but did that ever ball things
up, and I quickly abandoned it.

Below are my redirector.pl script and my Squid.conf file

Thank you for your assistance,

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA


squid.conf:
# CONFIG FILE FOR WWW_SQUID

# Note recommend you read:
# http://theoryx5.uwinnipeg.ca/guide/scenario/Running_Two_webservers_and_Squid.html
# before touching this config file, john.

#199.9.2.108 => www-new.nrlmry.navy.mil
# 199.9.2.48 => www.nrlmry.navy.mil
# THIS MUST BE AN IP ADDRESS! www.nrlmry.navy.mil will fail!!

#http_port 199.9.2.136:80 199.9.2.137:80

# For kdc2
http_port 192.160.159.132:8080
icp_port 0
#tcp_outgoing_address 127.0.0.1

#httpd_accel_host 127.0.0.1
httpd_accel_host virtual
httpd_accel_port 
#httpd_accel_port 80

# NOTE: the RUDE_ROBOTS_IP line is automatically written
# by the rude_robots.pl script which writes the line
# then restarts Squid by running squid -k reconfigure
# acl aclname src  ip-address/netmask ... (clients IP address)
acl RUDE_IP src "/users/webuser/www_squid/dyn_conf/Rude_Robots_IP.txt"
#http_access deny RUDE_IP

hierarchy_stoplist /tc\_pages /cgi\-bin /sat\-bin /tc\-bin /focus\-bin /~ /goes\_cc 
/coamps\-reg

#   A list of words which, if found in a URL, cause the object to
#   be handled directly by this cache.  In other words, use this
#   to not query neighbor caches for certain objects.  You may
#   list this option multiple times.

# Since pages created dynamically by tc-bin and sat-bin have
# an expire time on them I DO want them cached - jk
#hierarchy_stoplist /cgi-bin /~ /goes\_cc /coamps\-reg

acl QUERY urlpath_regex  research coamps dev security menu\.txt common index focus 
dmso flambe adap sampson  THUMB\.jpg LATEST\.jpg Latest\.jpg swish dev \~ dev\-bin 
tc\-dev Mod\-dev training SAIC shared\-bin shared swish cgi\-bin sat\-dev goes\_cc cc 
composer coamps\-reg wusage  sys\-bin banner aerosol Case\_
no_cache deny QUERY

cache_mem  64 MB

# Switched to aufs "threaded" from ufs "non-threaded"; supposed to scale better
# on Linux. jk 29AUG03

#cache_dir diskd /cache 12000 16 256 Q1=72 Q2=64
cache_dir ufs /cache 12000 16 256
cache_access_log /users/webuser/www_squid/logs/access.log
cache_log /users/webuser/www_squid/logs/cache.log

emulate_httpd_log on

pid_filename /users/webuser/www_squid/logs/squid.pid

#debug_options ALL,1,28,9
#debug_options ALL,1

redirect_program /users/webuser/www_squid/dyn_conf/www_redirect.pl
#redirect_program /data/www/web/htdocs_dyn/squid/www_redirect.pl
redirect_children 32

# Cannot use this option to accelerate multiple back-end servers!
#  TAG: redirect_rewrites_host_header
#   By default Squid rewrites any Host: header in redirected
#   requests.  If you are running a accelerator then this may
#   not be a wanted effect of a redirector.
#
#Default:
# redirect_rewrites_host_header on
redirect_rewrites_host_header on

acl acceleratedHost dst 199.9.2.134/255.255.255.255 199.9.2.135/255.255.255.255 
199.9.2.136/255.255.255.255 199.9.2.137/255.255.255.255 199.9.2.108/255.255.255.255 
199.9.2.48/255.255.255.255 199.9.2.69/255.255.255.255 199.9.2.33/255.255.255.255  
199.9.2.43/255.255.255.255 199.9.2.92/255.255.255.255 199.9.2.100/255.255.255.255 
199.9.2.101/255.255.255.255 199.9.2.102/255.255.255.255 199.9.2.103/255.255.255.255 
199.9.2.44/255.255.255.255 199.9.2.72/255.255.255.255 199.9.2.109/255.255.255.255 
199.9.2.110/255.255.255.255 199.9.2.111/255.255.255.255 199.9.2.126/255.255.255.255

acl ssl_noauth dstdomain io.nrlmry.navy.mil
acl acceleratedPort port  
acl myserver src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 81 3128   8080 81 443 563

RE: [squid-users] I only get TCP_MISS/200

2004-06-11 Thread Kent, Mr. John (Contractor)
Muthukumar,

Try running the pages you wish to see cached/"HIT" through
the cacheability tool.  I found my HITs went way up when I 
fixed problems with the pages and scripts found by this tool.

http://www.web-caching.com/cacheability.html

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA

-Original Message-
From: dravya [mailto:[EMAIL PROTECTED]
Sent: Friday, June 11, 2004 7:41 AM
To: [EMAIL PROTECTED]
Subject: Re: [squid-users] I only get TCP_MISS/200




 Thanx Muthukumar for your reply, BUT

 I have already added the lines you mentioned.  
 
 http_access allow all
 icp_access allow all
 
 httpd_accel_host virtual 
 httpd_accel_port 80 
 httpd_accel_with_proxy on
 httpd_accel_uses_host_header on
 
 Could there be some other reason? Any suggestion would be greatly appreciated.
 
 Thank you 
 
 Dravya
 
 
 
> On Jun 11, "Muthukumar" <[EMAIL PROTECTED]> wrote:
> > 
> > 
> > > I have recently installed squid as a transparent proxy. I see every http request
that goes
> > > through squid but it doesn't cache anything. Well, all I see in the access.log is
> > > TCP_MISS/200. Any suggestions??
> > 
> > 
> > Are you using the transparent squid (httpd_accel_host virtual, httpd_accel_port 80)
> > with the following settings,
> > 
> > httpd_accel_with_proxy on
> > httpd_accel_uses_host_header on
> > 
> > If you did not use these, then that is the problem.
> > 
> > Regards,
> > Muthukumar.
> > 
> > 
> > 
> > 
> > 
> > ---
> > ===  It is a "Virus Free Mail" ===
> > Checked by AVG anti-virus system (http://www.grisoft.com).
> > Version: 6.0.701 / Virus Database: 458 - Release Date: 6/7/2004
> > 
> > 
> 
> 




[squid-users] Squid and mod_expires

2004-06-09 Thread Kent, Mr. John (Contractor)
Greetings,

Writing to find out if the interaction between 
Squid and Apache with mod_expires is what I think it is
(that Squid will honor the Apache Expires headers just as
a browser would).

Using Squid as an accelerator on top of an Apache server farm.

If I have a directory of images and for that directory
within my apache servers config file have set

   ExpiresActive On
   ExpiresDefault "access plus 15 minutes"

Does this mean:
The first time a client accesses an image in this directory
the image will remain in Squid's cache for 15 minutes, then
drop out.  It will be re-entered into the squid cache the 
next time a client accesses the image again, remaining in
cache for another 15 minutes then drop out of cache ... repeat forever?

As compared to:

   ExpiresActive On
   ExpiresDefault "modification plus 15 minutes"

Which I think will cause Squid to cache the image only
for the first 15 minutes of its existence and then never
be cached again regardless of subsequent hits.

No expires info means the image will be cached per Squid's rules.

Is this right or wrong?
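
For reference, my understanding of what the two variants produce (times made up):
with "access plus 15 minutes" mod_expires sends something like

   Expires: Wed, 09 Jun 2004 18:15:00 GMT
   Cache-Control: max-age=900

where the 900 seconds are always counted from the moment the object is served, so
every fetch into an empty cache is fresh for a full 15 minutes.  With "modification
plus 15 minutes" the Expires time is computed from the file's mtime instead, so the
remaining freshness shrinks as the file ages and reaches zero 15 minutes after the
last modification.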

Thank you,

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA
http://www.nrlmry.navy.mil


RE: [squid-users] RE: Squid Accelerator and SSL, Unsupported method "L"

2004-02-16 Thread Kent, Mr. John (Contractor)
Henrik,

Thank you for responding.

Just to make sure I understand, did you mean to say that 
"You CAN only have one https_port directive"?

And that the problem is that I have BOTH
an http_port directive
and an https_port directive in my squid.conf file?

OR that I only have one https_port directive?

If it's the latter, then accessing https://199.9.2.137: should work,
but instead when I try it I get the following lines in cache.log:

2004/02/16 15:28:01| parseHttpRequest: Requestheader contains NULL characters
2004/02/16 15:28:01| clientParseRequestMethod: Unsupported method 'L'
2004/02/16 15:28:01| clientProcessRequest: Invalid Request


Also, since one cannot redirect https:// URLs to an http port,
does this mean you CAN redirect https:// URL calls 
to a backend server listening on the https port 443?


Here are the first two lines from my squid.conf file


http_port 199.9.2.137: vport=
https_port 199.9.2.137:443 cert=/users/webuser/squid3.0/etc/ssl.crt/webcache2.crt 
key=/users/webuser/squid3.0/etc/ssl.key/webcache2.key

Thank you
John Kent



-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 15, 2004 3:08 PM
To: Kent, Mr. John (Contractor)
Cc: Duane Wessels; Henrik Nordstrom (E-mail); Squid_Users (E-mail)
Subject: RE: [squid-users] RE: Squid Accelerator and SSL, Unsupported
method "L"


On Fri, 13 Feb 2004, Kent, Mr. John (Contractor) wrote:

> Duane and Henrik,
> 
> Thank you both for responding.  I'm thinking that a glance at my
> config file will reveal the problem to you so here it is:
> 
> What I'm trying to do is run Squid on port  for testing,
> have it accelerate servers listening to port  and also
> be able to redirect 443 requests, with SSL authentication being
> handled by Squid.

You only have one https_port directive

https_port 199.9.2.137:443 ...

so the only https:// URLs this Squid will accept are URLs directed to this address:port.

You can not direct https:// URLs to an http_port (at least not in a
reverse-proxy/accelerator; Internet web proxying is a completely different
story).

Regards
Henrik



RE: [squid-users] Low hit rate

2004-02-15 Thread Kent, Mr. John (Contractor)
Kemi,

I increased my hit ratio by running pages and script output
through a cacheability tool and taking corrective action as
required.  The main thing was to add mod_expires and mod_headers to 
my servers.

http://www.cacheflow.com/technology/tools/friendly/cacheability/index.cfm

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA


-Original Message-
From: Duane Wessels [mailto:[EMAIL PROTECTED]
Sent: Saturday, February 14, 2004 4:47 PM
To: Kemi Salam-Alada
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Low hit rate





On Sat, 14 Feb 2004, Kemi Salam-Alada wrote:

> Hi all,
>
> How can I tune my squid so that I can generate a high hit rate?  Presently, I
> am running squid using FreeBSD 4.3 OS and Squid 2.5 STABLE2.
> The file system used for the disk is aufs.

See the 'refresh_pattern' directive in squid.conf.
You can probably increase your hit ratio by increasing
the values of the refresh_pattern line(s).
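
For example (values purely illustrative, not a recommendation; the first and last
numeric columns are minutes), loosening the catch-all pattern lets objects that carry
no explicit expiry information stay fresh longer:

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern .               60      50%     14400

An object younger than the minimum is considered fresh, older than the maximum is
considered stale, and in between it is judged by the given percentage of its age
since last modification.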

Duane W.


RE: [squid-users] RE: Squid Accelerator and SSL, Unsupported method "L"

2004-02-13 Thread Kent, Mr. John (Contractor)
Duane and Henrik,

Thank you both for responding.  I'm thinking that a glance at my
config file will reveal the problem to you so here it is:

What I'm trying to do is run Squid on port  for testing,
have it accelerate servers listening to port  and also
be able to redirect 443 requests, with SSL authentication being
handled by Squid.

John Kent
Webmaster
Naval Research Laboratory
Monterey, California
http://www.nrlmry.navy.mil

#

http_port 199.9.2.137: vport=
https_port 199.9.2.137:443 cert=/users/webuser/squid3.0/etc/ssl.crt/webcache2.crt 
key=/users/webuser/squid3.0/etc/ssl.key/webcache2.key

sslproxy_flags DONT_VERIFY_PEER
icp_port 0

acl RUDE_IP src "/users/webuser/www_squid/dyn_conf/Rude_Robots_IP.txt"
http_access deny RUDE_IP


hierarchy_stoplist /tc\_pages /cgi\-bin /sat\-bin /tc\-bin /focus\-bin /~ /goes\_cc 
/coamps\-reg

acl QUERY urlpath_regex  sat_products nrlonly focus dmso tc_home2 flambe adap bacimo 
tc_home\.html proddemo researchproj agenda headlines sampson pubs aboutdivision 
fleet_apps home_30 subfoot THUMB\.jpg LATEST\.jpg Latest\.jpg swish dev \~ dev\-bin 
tc\-dev Mod\-dev training SAIC shared\-bin shared swish cgi\-bin sat\-dev goes\_cc cc 
composer coamps\-reg wusage  sys\-bin banner aerosol Case\_
no_cache deny QUERY

cache_mem 8 MB

cache_dir diskd /users/webuser/squid3.0/var/cache 12000 16 256 Q1=72 Q2=64

emulate_httpd_log on

redirect_program /users/webuser/squid3.0/dyn_conf/ssl_redirect.pl
redirect_children 10

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

#Suggested default:
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

acl acceleratedHost dst 199.9.2.134/255.255.255.255 199.9.2.135/255.255.255.255 
199.9.2.136/255.255.255.255 199.9.2.137/255.255.255.255 199.9.2.108/255.255.255.255 
199.9.2.48/255.255.255.255 199.9.2.69/255.255.255.255 199.9.2.33/255.255.255.255  
199.9.2.43/255.255.255.255 199.9.2.92/255.255.255.255 199.9.2.100/255.255.255.255 
199.9.2.101/255.255.255.255 199.9.2.102/255.255.255.255 199.9.2.103/255.255.255.255 
199.9.2.44/255.255.255.255 199.9.2.72/255.255.255.255 199.9.2.109/255.255.255.255 
199.9.2.110/255.255.255.255 199.9.2.111/255.255.255.255 199.9.2.126/255.255.255.255

acl ssl_noauth dstdomain io.nrlmry.navy.mil
acl acceleratedPort port  
acl myserver src 127.0.0.1/255.255.255.255


acl manager proto cache_object
#acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl Methods method GET POST HEAD

# Cachemgr related acl's
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl example src 199.9.2.136/255.255.255.255
acl example src 199.9.2.137/255.255.255.255
acl all src 0.0.0.0/0.0.0.0
http_access allow manager localhost
http_access allow manager example
http_access deny manager
http_access allow all

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow acceleratedHost acceleratedPort
http_access allow Methods

http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

acl local-servers dstdomain nrlmry.navy.mil
always_direct allow all

# And finally deny all other access to this proxy
http_access deny all

http_reply_access allow all

#Default:
cache_effective_user webuser
cache_effective_group webgroup

logfile_rotate 30

# strip_query_terms on
strip_query_terms off

###
-Original Message-
From: Duane Wessels [mailto:[EMAIL PROTECTED]
Sent: Friday, February 13, 2004 1:03 PM
To: Kent, Mr. John (Contractor)
Cc: Squid_Users (E-mail)
Subject: Re: [squid-users] RE: Squid Accelerator and SSL, Unsupported
method "L"





On Fri, 13 Feb 2004, Kent, Mr. John (Contractor) wrote:

> Greetings,
>
> Setting up Squid3.0 as an accelerator that needs to handle SSL.
>
> As you recommended Henrik:
> Un-encrypted my key.  Modified key and cert permissions.
> No longer get FATAL: Bungled squid.conf error. ! Good.
>
> For testing running Squid on port 
> That works fine.
>
> But w

[squid-users] RE: Squid Accelerator and SSL, Unsupported method "L"

2004-02-13 Thread Kent, Mr. John (Contractor)
Greetings,

Setting up Squid3.0 as an accelerator that needs to handle SSL.

As you recommended Henrik:
Un-encrypted my key.  Modified key and cert permissions.
No longer get FATAL: Bungled squid.conf error. ! Good.

For testing running Squid on port 
That works fine.

But when I attempt to access   https://...: 
I get nothing.

The cache log shows:

2004/02/13 10:36:37| clientProcessRequest: Invalid Request
2004/02/13 10:36:46| parseHttpRequest: Requestheader contains NULL characters
2004/02/13 10:36:46| clientParseRequestMethod: Unsupported method 'L'    <-- Bad!
2004/02/13 10:36:46| clientProcessRequest: Invalid Request

Appreciate any suggestions.

Thank you,
John Kent

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Monday, February 09, 2004 4:31 PM
To: Kent, Mr. John (Contractor)
Cc: Henrik Nordstrom; Squid_Users (E-mail)
Subject: RE: Squid Accelerator and SSL


On Mon, 9 Feb 2004, Kent, Mr. John (Contractor) wrote:

> Henrik and Brian,
> 
> As recommended, I created certificates and keys for my
> Squid server  using openssl
> 
> Created certificate:
> >openssl genrsa -des3 -out webcache2.key 1024

This generates an encrypted RSA key of 1024 bits. Squid can not load 
encrypted RSA keys unless you start it with the -N option. I recommend 
decrypting the key unless you actually want to have to enter the encryption 
key manually each time Squid is restarted.
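
A decrypted copy can be made with something along these lines (it prompts for the
passphrase once; the output file name is arbitrary):

  openssl rsa -in webcache2.key -out webcache2-nopass.key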

> Created CSR:
> >openssl req -new -key webcache2.key -out webcache2.csr
> 
> Then Signed it:
> >openssl x509 -req -days 3650 -in webcache2.csr -signkey webcache2.key -out 
> >webcache2.crt

This generates the certificate.

> Modified my squid.conf file by adding the following line
> https_port 199.9.2.137:443 cert=/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt 
> key=/users/webuser/squid
> 3.0/conf/ssl.key/webcache2.key
> 
> When I go to start Squid get:
> bash-2.05$ ./squid  
> 2004/02/09 15:14:51| Failed to acquire SSL certificate 
> '/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt': error:02001002:system 
> library:fopen:No such file or directory
> FATAL: Bungled squid.conf line 135: https_port 199.9.2.137:443 
> cert=/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt 
> key=/users/webuser/squid3.0/conf/ssl.key/webcache2.key

The error indicates that /users/webuser/squid3.0/conf/ssl.crt/webcache2.crt 
does not exist, or maybe that Squid does not have permission to enter the 
directory.

Maybe more information is given if you start Squid with the -X flag.

Regards
Henrik



RE: [squid-users] Squid Performance Analysis

2004-02-13 Thread Kent, Mr. John (Contractor)
Greetings,

I use Wusage and have found that it accidentally creates the desired reports for me.
I let it parse my Squid access.log instead of the webservers' logs.
The following are the unintended but desirable
results:

Clicking on "Top referring URLS" gives:

Give for yesterday's stats.
 
Rank  Referring Page                  Accesses      %          Bytes      %
  1   tcp_denied:none                  141,395  51.54    213,718,096   6.45
  2   tcp_miss:direct                   83,507  30.44  2,370,314,745  71.56
  3   tcp_mem_hit:none                  19,326   7.04      9,636,699   0.29
  4   tcp_hit:none                      17,372   6.33    517,886,678  15.64
  5   tcp_ims_hit:none                   7,686   2.80      3,489,863   0.11
  6   tcp_refresh_hit:direct             2,748   1.00    142,606,780   4.31
  7   tcp_refresh_miss:direct            1,651   0.60     43,327,915   1.31
  8   tcp_client_refresh_miss:direct       568   0.21     11,204,824   0.34
  9   tcp_negative_hit:none                 87   0.03         52,771   0.00
 10   tcp_miss:none                         19   0.01         12,524   0.00


Thus I have a good handle on my hits and misses.

Wusage home page is at:  
http://www.boutell.com/wusage/

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA

-Original Message-
From: Jay Turner [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 12, 2004 5:26 PM
To: Merton Campbell Crockett; Squid Users List
Subject: RE: [squid-users] Squid Performance Analysis


> Is there something that analyzes the various "*_HIT" statuses in the log
> and produces a "what might have been report"?  Does anyone know of any
> tools that are not listed on the Squid Cache web site that would provide
> this type of report?

Your requirements sound like you are looking for a cache reporting tool.

Have you tried Calamaris?
It can provide information like the following:

Incoming TCP-requests by status
status                       request      %       Byte      %   sec  kB/sec
HIT                          1488651  37.68   5481382K  21.96     0    9.74
  TCP_IMS_HIT                 486076  12.30    139571K   0.56     0    7.82
  TCP_REFRESH_HIT             413379  10.46   1626804K   6.52     0    4.87
  TCP_MEM_HIT                 280950   7.11    492567K   1.97     0   41.14
  TCP_HIT                     223217   5.65   3122269K  12.51     0   16.05
  TCP_NEGATIVE_HIT             85029   2.15    100170K   0.40     0   24.26
MISS                         2435997  61.65     19010M  77.99     2    3.53
  TCP_MISS                   2206700  55.85     18375M  75.39     2    3.50
  TCP_CLIENT_REFRESH_MISS     184121   4.66    369832K   1.48     0    5.17
  TCP_REFRESH_MISS             45138   1.14    279813K   1.12     1    4.22
  TCP_SWAPFAIL_MISS               38   0.00      19094   0.00     0    3.97
ERROR                          26514   0.67   11954009   0.05    70    0.01
  TCP_MISS                     22614   0.57   10625538   0.04    78    0.01
  TCP_REFRESH_MISS              2685   0.07          0   0.00    37    0.00
  NONE                           901   0.02    1140085   0.00     0   41.78
  TCP_DENIED                     159   0.00     182942   0.00     0   42.76
  TCP_CLIENT_REFRESH_MISS        155   0.00       5444   0.00     8    0.00
Sum                          3951162            24374M             2    3.14

But formatted more nicely via a web interface:

http://cord.de/tools/squid/calamaris/Welcome.html

Regards
Jay

 




[squid-users] RE: Squid Accelerator and SSL

2004-02-09 Thread Kent, Mr. John (Contractor)
Henrik and Brian,

As recommended, I created certificates and keys for my
Squid server  using openssl

Created certificate:
>openssl genrsa -des3 -out webcache2.key 1024

Created CSR:
>openssl req -new -key webcache2.key -out webcache2.csr

Then Signed it:
>openssl x509 -req -days 3650 -in webcache2.csr -signkey webcache2.key -out 
>webcache2.crt


Modified my squid.conf file by adding the following line
https_port 199.9.2.137:443 cert=/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt 
key=/users/webuser/squid
3.0/conf/ssl.key/webcache2.key

When I go to start Squid, I get:
bash-2.05$ ./squid  
2004/02/09 15:14:51| Failed to acquire SSL certificate 
'/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt': error:02001002:system 
library:fopen:No such file or directory
FATAL: Bungled squid.conf line 135: https_port 199.9.2.137:443 
cert=/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt 
key=/users/webuser/squid3.0/conf/ssl.key/webcache2.key
Squid Cache (Version 3.0-PRE3): Terminated abnormally.
CPU Usage: 0.020 seconds = 0.020 user + 0.000 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 429
Aborted

Did a google search and found that Henrik had recommended to someone who reported the
same problem the following:

>If you use encrypted RSA keys then you must start Squid with the -N 
option

So tried:
bash-2.05$ ./squid -N
2004/02/09 15:16:34| Failed to acquire SSL certificate 
'/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt': error:02001002:system 
library:fopen:No such file or directory
FATAL: Bungled squid.conf line 135: https_port 199.9.2.137:443 
cert=/users/webuser/squid3.0/conf/ssl.crt/webcache2.crt 
key=/users/webuser/squid3.0/conf/ssl.key/webcache2.key
Squid Cache (Version 3.0-PRE3): Terminated abnormally.
CPU Usage: 0.010 seconds = 0.010 user + 0.000 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 429
Aborted

And also

bash-2.05$ ./squid -v
Squid Cache: Version 3.0-PRE3
configure options: '--prefix=/users/webuser/squid3.0' '--enable-storeio=diskd,ufs' 
'--enable-ssl' '--with-openssl=/usr/lib'

I noticed that the default squid.conf file talks about
cert=certificate.pem [key=key.pem]

Does the fact that my certificate and key files end in .crt and .key rather than .pem cause the failure?
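
A couple of sanity checks I still need to run (paths copied straight from the error
message; the last one only matters if Squid is started as a different user):

  ls -ld /users/webuser/squid3.0/conf/ssl.crt
  ls -l  /users/webuser/squid3.0/conf/ssl.crt/webcache2.crt
  su webuser -c 'cat /users/webuser/squid3.0/conf/ssl.crt/webcache2.crt > /dev/null'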

Thank you,
John Kent
Webmaster
Naval Research Laboratory
Monterey, CA

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Friday, February 06, 2004 7:32 PM
To: Kent, Mr. John (Contractor)
Cc: Squid_Users (E-mail)
Subject: RE: Squid Accelerator and SSL


On Fri, 6 Feb 2004, Kent, Mr. John (Contractor) wrote:

> The problem I now have is that the accelerator works perfectly and hides
> the fact that the client is connecting to an https server.  

You should set up Squid as an https reverse proxy. See the https_port 
directive.

Regards
Henrik



[squid-users] RE: Squid Accelerator and SSL

2004-02-06 Thread Kent, Mr. John (Contractor)
Greetings,

I downloaded and installed Squid3.0 and it works!

I can redirect to a backend server running https and the
web pages come up fine.

The problem I now have is that the accelerator works perfectly and hides
the fact that the client is connecting to an https server.  

Somehow I don't think that's what I want.

Is there a way to hide all redirections from the client's browser except those
going to an https server?

Doesn't the client need to "see" https in the URL in order to securely transmit a 
password, for instance?

I guess the only way to handle this is to have a hyperlink on a page directly to 
the https server and bypass Squid altogether.

If this shows a gross ignorance of the process, I confess.
Perhaps someone can set me straight.

Thank you,
John Kent


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Friday, February 06, 2004 9:44 AM
To: Kent, Mr. John (Contractor)
Cc: Squid_Users (E-mail); Henrik Nordstrom (E-mail)
Subject: Re: Squid Accelerator and SSL


Squid-2.5.STABLE can not initiate SSL connections, only accept SSL 
connections.

To initiate SSL connections you need the SSL update patch from
devel.squid-cache.org, or Squid-3.

Regards
Henrik

On Fri, 6 Feb 2004, Kent, Mr. John (Contractor) wrote:

> 
> Greetings,
> 
> I am using Squid as a front-end accelerator on top of a server farm.
> Wanted to re-direct to an https enabled Apache Server.
> Squid is in a "DMZ" and talks to the server farm through a firewall.
> The Apache server was set up independently of Squid, by which I mean
> I created the keys and certificates for it only.
> 
> It works fine when accessed directly.
> 
> Per the FAQ, I rebuilt my Squid enabling ssl
> 
> ./squid -v  now gives:
> >Squid Cache: Version 2.5.STABLE4
> configure options:  --prefix=/users/webuser/www_squid
> --enable-storeio=diskd,ufs --enable-ssl --with-openssl=/usr/lib
> 
> When the redirection occurs get the following error page from Squid:
> 
> ERROR
> The requested URL could not be retrieved
> 
> While trying to retrieve the URL: <https://xxl>
> The following error was encountered:
> * Unsupported Request Method and Protocol
> Squid does not support all request methods for all access protocols. For
> example, you can not POST a Gopher request.
> 
> Clicking on the "trying to retrieve" URL above works fine.
> 
> Any suggestions?
> 
> Obviously I'm missing a great deal here.
> If there is more information that I have failed to read, I accept all 
> criticism, but would appreciate the link to
> the applicable reference.
> 
> Thank you,
> 
> John Kent
> Webmaster
> Naval Research Laboratory
> Monterey, CA
> http://www.nrlmry.navy.mil
> 
> 
> 



[squid-users] Squid Accelerator and SSL

2004-02-06 Thread Kent, Mr. John (Contractor)

Greetings,

I am using Squid as a front-end accelerator on top of a server farm.
Wanted to re-direct to an https enabled Apache Server.
Squid is in a "DMZ" and talks to the server farm through a firewall.
The Apache server was set up independently of Squid, by which I mean
I created the keys and certificates for it only.

It works fine when accessed directly.

Per the FAQ, I rebuilt my Squid enabling ssl

./squid -v  now gives:
>Squid Cache: Version 2.5.STABLE4
configure options:  --prefix=/users/webuser/www_squid
--enable-storeio=diskd,ufs --enable-ssl --with-openssl=/usr/lib

When the redirection occurs get the following error page from Squid:

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: 
The following error was encountered:
*   Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols. For
example, you can not POST a Gopher request.

Clicking on the "trying to retrieve" URL above works fine.

Any suggestions?

Obviously I'm missing a great deal here.
If there is more information that I have failed to read, I accept all 
criticism, but would appreciate the link to
the applicable reference.

Thank you,

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA
http://www.nrlmry.navy.mil





[squid-users] Problem with Tomcat Authentication behind Accelerator

2003-10-08 Thread Kent, Mr. John
Greetings,

Using Squid as an accelerator/redirector in front of a diverse
collection of machines and webservers/applications.

All works well until a client clicks on a particular application
which calls a JavaServer Page requiring authentication.
Tomcat generates the authentication page fine for the client.

The problem is that once the login and password are submitted, the page created by
Tomcat forces the submission to bypass our main URL, which Squid listens to, and
instead tries to send it directly to the backend machine Tomcat is running on.

The problem is that outside users are blocked from directly accessing the backend servers.

So is this something I can correct by modifying my Squid config file
or redirector OR is this something that should be directed to a Tomcat board?

Thank you,

John Kent


[squid-users] Debug Settings

2003-09-19 Thread Kent, Mr. John
Greetings,

Love Squid!  It helped serve forecasts and photos during Hurricane Isabel.

Trying to eke out better performance, and so want to use cachemgr.cgi.
So far no luck.

I think my question is:

What would be the recommended debug_options settings to see the IP number Squid is
refusing to make a connection for?

The background: I am trying to use cachemgr.cgi.  When I call it, after the
login/password page I get:
Cache Manager Error
connect: (111) Connection refused 

Generated Fri, 19 Sep 2003 14:51:00 GMT, by cachemgr.cgi/[EMAIL PROTECTED] 


I have set my config file in accordance with the FAQ:

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl example src 199.9.2.136/255.255.255.255
acl example src 199.9.2.137/255.255.255.255
acl all src 0.0.0.0/0.0.0.0
http_access allow manager localhost
http_access allow manager example
http_access deny manager
http_access allow all
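
One thing I have not tried yet is talking to the manager interface directly with
squidclient, to rule out cachemgr.cgi itself (the host and port below are guesses
based on my setup; use whatever http_port points at):

  ./squidclient -h 199.9.2.136 -p 8080 mgr:info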

Thank you for your help,

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA
http://www.nrlmry.navy.mil/tc_pages/tc_home.html





RE: [squid-users] Help!! No TCP_HIT in access.log

2003-09-11 Thread Kent, Mr. John
Paul,

I use Squid as an accelerator.

Something that helped me dramatically improve my HIT ratio was to run my pages
through the "Cacheability Engine Query" at 
http://www.web-caching.com/cacheability.html

It showed me that I needed to add mod_expires among other things
to my Apache web server.

If the engine says your page isn't cacheable, then it's my experience that Squid won't 
cache it.
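
A rough sketch of the kind of directives involved; the module paths and the
15-minute value are illustrative, not copied from my actual config:

  LoadModule expires_module modules/mod_expires.so
  LoadModule headers_module modules/mod_headers.so

  ExpiresActive On
  ExpiresByType image/jpeg "access plus 15 minutes"
  # drop anything a script might set that defeats caching
  Header unset Pragma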

John Kent


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 10, 2003 7:18 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] Help!! No TCP_HIT in access.log


Hi all!
This is my last question about this problem. I can't find any suggestions about this 
problem via Google search. I call the same site several times and there are no TCP_HIT
entries in the logs. I have no TCP_DENIED entries. What is wrong, or what could it be? I use 
Squid 2.5.STABLE3.
Here is a piece of my config:
#-
# NETWORK OPTIONS
#--
http_port 3128
#https_port
#ssl_unclean_shutdown
#icp_port
#htcp_port
#mcast_groups
#udp_incoming_address
#udp_outgoing_address
#-
# LOGFILE PATHNAMES AND CACHE DIRECTORIES
#-
cache_dir aufs /var/squid/cache 128 16 256
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
#cache_swap_log
#emulate_httpd_log
#log_ip_on_direct
#mime_table /etc/squid/mime.conf
#log_mime_hdrs on
#useragent_log
#referer_log
#pid_filename
#debug_options ALL,1 78,9
#log_fqdn
#client_netmask
#-
# ADMINISTRATIVE PARAMETERS
#-
cache_mgr [EMAIL PROTECTED]
cache_effective_user squid
cache_effective_group squid
visible_hostname Mask
#unique_hostname
#hostname_aliases
#-
# OPTIONS WHICH AFFECT THE CACHE SIZE
#-
cache_mem 16 MB
#cache_swap_low
#cache_swap_high
maximum_object_size 1024 KB
maximum_object_size_in_memory 20 KB
minimum_object_size 0 KB
#ipcache_size
#ipcache_low
#ipcache_high
#fqdncache_size
#cache_replacement_policy
#memory_replacement_policy
#-
#Cache tuning
#-
#request_header_max_size 5 KB
#request_body_max_size 0 KB
#refresh_pattern  ftp: 1440  20%  10080
#refresh_pattern  .   480  20%  1440
quick_abort_min 16 KB
quick_abort_max 16 KB
quick_abort_pct 96
negative_ttl 5 minutes
positive_dns_ttl 360 minutes
negative_dns_ttl 5 minutes
range_offset_limit 100 KB
#-
#Timeouts
#-
connect_timeout 120 seconds
#peer_connect_timeout
#read_timeout
#request_timeout
#persistent_request_timeout
client_lifetime 8 hours
half_closed_clients on
#pconn_timeout
#ident_timeout
#-
# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
#-
ftp_user Squid@
ftp_list_width 32
#ftp_passive on
ftp_sanitycheck on
#pinger_program /bin/ping



RE: [squid-users] Trouble Building with aufs

2003-09-10 Thread Kent, Mr. John
Henrik,

Deleting the source tree and re-installing did the trick.

Thank you,
John Kent

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 10, 2003 5:23 AM
To: Kent, Mr. John; Squid_Users (E-mail)
Subject: Re: [squid-users] Trouble Building with aufs


On Wednesday 10 September 2003 13.52, Kent, Mr. John wrote:

> gcc -DHAVE_CONFIG_H -I. -I. -I../include -I../include -I../include 
>   -g -O2 -Wall -c `test -f Array.c || echo './'`Array.c
> .deps/Array.TPo: Permission denied
>
> Would appreciate any suggestions.

The permissions in your source directory seem to be screwed up.

Try the following:

   make distclean

   ./configure ...

   make

If it still fails, delete the source tree and unpack the sources 
again.

Regards
Henrik

-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


[squid-users] Trouble Building with aufs

2003-09-10 Thread Kent, Mr. John
Greetings,

Have been running Squid-2.5STABLE3 successfully on Linux with ufs.
Wanted to see if we could get some performance improvements by using aufs.

Rebuilt perl with threads enabled.

Configured:
>./configure --prefix=/usr/local/squid --enable-pthreads --enable-store-io=aufs

Compiling gives the following error:

Making all in lib
make[1]: Entering directory `/users/webuser/src/squid-2.5.STABLE3/lib'
source='Array.c' object='Array.o' libtool=no \
depfile='.deps/Array.Po' tmpdepfile='.deps/Array.TPo' \
depmode=gcc3 /bin/sh ../cfgaux/depcomp \
gcc -DHAVE_CONFIG_H -I. -I. -I../include -I../include -I../include -g -O2 -Wall -c 
`test -f Array.c || echo './'`Array.c
.deps/Array.TPo: Permission denied
make[1]: *** [Array.o] Error 1
make[1]: Leaving directory `/users/webuser/src/squid-2.5.STABLE3/lib'
make: *** [all-recursive] Error 1
bash-2.05$ 

Would appreciate any suggestions.

Thank you,
John Kent


[squid-users] Optimum Number of Redirectors

2003-09-09 Thread Kent, Mr. John
Greetings,

How do I determine the optimum number of redirector processes to use?
The FAQ says:

Caution
If you start too few, Squid will have to wait for them to process a backlog of URLs, 
slowing it down. If you start too many, they will use RAM and other system resources.

So what is too few or too many?  Is it just trial and error?

I'm running Squid 2.5.STABLE3 on Linux (Dell 350 with 1 GB of RAM),
using Perl redirectors.

Is there some ballpark figure, e.g. between 10 and 32?
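
One approach I plan to try: watch the redirector statistics in the cache manager
while under load and size redirect_children from that.  If the build exposes it,
something like the following should show how busy the children are (host and port
are just my setup):

  ./squidclient -h localhost -p 8080 mgr:redirector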

Thank you,
John Kent


[squid-users] Intermittent Caching During Benchmark Testing

2003-06-16 Thread Kent, Mr. John
Greetings,

Running Squid-2.5.STABLE3 on Linux as an accelerator with 32 redirector
processes.

Tested it using Apache Bench, calling the Squid server
by its DNS name.  It worked perfectly: viewing the access.log I saw that every
request was served from the cache (TCP_HIT:NONE), and overall requests per second
was 25.

I am planning on running multiple Squid accelerators behind DNS round-robin,
so I need to access the Squid server using its IP number.  So I ran the
following test:
./ab -n 800 -c 200 http://199.9.2.135:8080/tc_pages/tc_home.html

This time the access.log shows only intermittent hits:

199.9.2.65 - - [16/Jun/2003:13:09:22 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32336 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:22 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32336 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:22 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32464 TCP_HIT:NONE
199.9.2.65 - - [16/Jun/2003:13:09:22 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32336 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:22 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32464 TCP_HIT:NONE
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32336 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32336 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32336 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
32464 TCP_HIT:NONE
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
11103 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
12195 TCP_HIT:NONE
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
12313 TCP_HIT:NONE
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
20671 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
26151 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
15199 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 -0700] "GET
http://calamari.nrlmry.navy.mil:/tc_pages/tc_home.html HTTP/1.0" 200
4149 TCP_MISS:DIRECT
199.9.2.65 - - [16/Jun/2003:13:09:23 

Overall requests per second (RPS) was half the previous result, now only 12.

So my question is: is there some reason why I get only intermittent cache hits
for the same page when accessing Squid by IP number vs. DNS name?
Is there something I can change in my config file to improve the situation?
My config file is below.
Is this some artifact of the benchmarking test?

Thank you very much,

Mr. John Kent
Naval Research Laboratory
Monterey, CA


  # CONFIG FILE FOR WWW_SQUID

# THIS MUST BE AN IP ADDRESS! www.nrlmry.navy.mil will fail!!
http_port 199.9.2.135:8080
icp_port 0
httpd_accel_host virtual
httpd_accel_port 
#httpd_accel_port 80

# NOTE: the RUDE_ROBOTS_IP line is automatically written
# by the rude_robots.pl script which writes the line
# then restarts Squid by running squid -k reconfigure
# acl aclname src  ip-address/netmask ... (clients IP address)
acl RUDE_IP src "/data/www/web/htdocs_dyn/squid/etc/Rude_Robots_IP.txt"
http_access deny RUDE_IP

hierarchy_stoplist /tc\_pages /cgi\-bin /sat\-bin /tc\-bin /focus\-bin /~
/goes\_cc /coamps\-reg

#   A list of words which, if found in a URL, cause the object to
#   be handled directly by this cache.  In other words, use this
#   to not query neighbor caches for certain objects.  You may
#   list this option multiple times.

# Since pages created dynamically by tc-bin and sat-bin have
# an expire time on them I DO want them cached - jk
#hierarchy_stoplist /cgi-bin /~ /goes\_cc /coamps\-reg

acl QUERY urlpath_regex  THUMB\.jpg LATEST\.jpg Latest\.jpg swish dev \~
dev\-bin tc\-dev Mod\-dev training SAIC shared\-bin
cgi\-bin sat\-dev goes\_cc cc composer coamps\-reg wusage  sys\-bin banner
aerosol Case\_
no_cache deny QUERY

cache_mem  32 MB

cache_dir ufs /users/webuser/www_squid/cache 100 16 256
cache_access_log /users/webuser/www_squid/logs/access.log
cache_log /users/webuser/www_squid/logs/cache.log

emulate_httpd_log on

pid_filename /users/webuser/www_squid/logs/squid.pid

#debug_options ALL,1,28,9

redirect_program /data/www/web/htdocs_dyn/squid/www_redirect.pl
#red