[squid-users] squid with 16384 descriptors still running out.

2005-04-26 Thread Ray Charles

Hi,

I haven't posted in a while, but I am seeing an old
problem recur.

I am getting the "Your Cache is Running out of
FileDescriptors"  error again.

My platform is RHEL with the stock RHEL kernel, a 3.1 GHz
P4, and 1 GB of RAM, running Squid 2.5.STABLE9.


Is there any danger in doubling the FD limit again, to 32768?
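
For reference, this is roughly the FAQ recipe I followed last
time; the limit value is just the number I am considering, and
the init-script step is an assumption about how squid gets
started here:

ulimit -HSn 32768            # raise the shell's hard/soft FD limits before building
./configure ...              # re-run configure so it detects the new maximum
make && make install
# the same ulimit also has to be applied in the startup script before squid runs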

Thanks-



[squid-users] Testing cacheability of objects

2005-03-06 Thread Ray Charles

Hi,

I've been finding some docs online suggesting that URLs
that contain a ? are not cacheable.  I think that this
was once, perhaps long ago, the default behavior, but is
it still the case? If so, what does one do to get Squid
to cache an object whose URL contains a ?
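
If I remember the default 2.5 squid.conf correctly, these are
the stock lines (not mine) that keep query URLs out of the
cache:

acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
hierarchy_stoplist cgi-bin ?

Commenting out the "no_cache deny QUERY" line should let such
objects be cached, provided the origin sends usable Expires /
Last-Modified / Cache-Control headers.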

Does anyone know of a good test for the cacheability of
objects? I found this one:
http://www.ircache.net/cgi-bin/cacheability.py

But it may be too old to be considered useful.
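
A crude manual check is just to look at the response headers
that govern caching; a quick sketch with wget, where the URL
is only a placeholder:

wget -S --spider 'http://www.example.com/some/object'
# look for Expires, Cache-Control, Last-Modified and ETag in the output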


Thanks-






[squid-users] ideas for getting this to work with squid

2005-03-04 Thread Ray Charles
Hi,


My setup is RHEL-3.1 and Squid-2.5.STABLE9 with
collapsed_forwarding turned on.


I am trying to get squid to work in accelerator mode
to provide relief to a backend server. The clients are
set up to make requests to the squid box, but if squid
doesn't have the object in cache, the request goes on to
my origin server, which is running Tomcat.  This can
mean several hundred clients hitting the origin server
at the same time for the same object. A sad situation
for the origin server.

I would expect that once the object is received from
the origin server it gets cached, but I am seeing that
it's not being cached.  I've verified that the object
is cacheable. When several requests arrive at the same
time I can live with one or two of them going to the
origin server, but I've got to stop each and every one
of the requests from continuing to pour into the
origin server.  How do I get squid to do that?
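
In case it matters, here is roughly how I have been checking
whether squid decides to store the object; the log path is
from my squid.conf, and the URI is a placeholder:

grep '/some/object' /opt/log/squid/storage.log
# SWAPOUT lines mean the object was written to the cache,
# RELEASE lines mean squid threw it away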

-any help / pointers / directions to a cliff or bridge
appreciated.

-Ray





[squid-users] help with accelerator mode

2005-03-01 Thread Ray Charles

Hi,

I have some lingering questions from last week that
are, well, still with me. So I am hoping to hear from
anyone with experience running squid in accelerator
mode; experience with collapsed_forwarding would be
great too.


As in my previous posts, my setup is RHEL-3.1 and
Squid-2.5.STABLE9 with collapsed_forwarding turned on.


I am trying to get squid to work in accelerator mode
to provide relief to a backend server. The clients are
set up to point to the squid box, but when squid
doesn't have the object in cache, the request goes on
to my origin server.  This can mean several hundred
clients hitting the origin server at the same time for
the same object. A sad situation for the origin server.

Once the object is received from the origin server,
it's not being cached.  I've verified that the object
is cacheable. When several requests arrive at the same
time I can live with one or two going to the origin
server, but I've got to stop each and every one of the
requests from continuing to pour into the origin
server.  How do I get squid to do that?
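
For what it's worth, this is how I count how many of the
simultaneous requests were answered from cache versus passed
to the origin; it assumes the native access.log format
(emulate_httpd_log off), and the URI is a placeholder:

grep '/some/object' /opt/log/squid/access.log | grep -c TCP_MISS   # went to the origin
grep '/some/object' /opt/log/squid/access.log | grep -c _HIT       # TCP_HIT, TCP_MEM_HIT, TCP_IMS_HIT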


Thanks for any help,

-Ray









[squid-users] Accelerator mode

2005-02-25 Thread Ray Charles

Hi all,

Using RHEL-3.1 x86, Squid 2.5.STABLE9 + collapsed_forwarding
patch.

I am trying to set up squid as a surrogate for my back-end
server, which has an HTTP service running on port 44003.
What I am seeing is that when multiple requests for the
same URI come in, every one of them is allowed through to
the backend rather than just one or two.  So, using a
separate box, I set up multiple wgets (say 20) to request
the same URI, and each request initiates a session to the
back end. :(
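
Here is the kind of test I ran from the separate box; the
host, port and URI are placeholders for my real values:

for i in `seq 1 20`; do
  wget -q -O /dev/null 'http://squidbox:32011/some/object' &
done
wait
# then count the matching requests in the backend's (tomcat) access log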

Any thoughts about what is wrong with my test or the
config, or hints about what folks usually goof up when
deploying squid in accelerator mode?

Thanks in advance for any help/suggestions,

Ray


Here is my squid.conf file
###
###
###
http_port 32011
http_port 21100
http_port 44003
httpd_accel_host mybackendexampleserver.com
httpd_accel_port 0
httpd_accel_uses_host_header on
httpd_accel_with_proxy on

#
#
icp_port 0
cache_mem 128 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 1024 MB
#maximum_object_size_in_memory 100 MB
maximum_object_size_in_memory 8 MB
ipcache_size 1024
ipcache_low 90
ipcache_high 95
cache_replacement_policy lru
memory_replacement_policy lru
#
#
cache_dir ufs /opt/squid/cache 16384 16 256
cache_access_log /opt/log/squid/access.log
cache_log /opt/log/squid/cache.log
mime_table /opt/squid/etc/mime.conf
pid_filename /var/run/squid.pid
##refresh_pattern . 0 20% 10080
refresh_pattern . 10 20% 10080
#
#
#redirect_rewrites_host_header off
#Tried but no difference
#
collapsed_forwarding on
refresh_stale_hit 0 seconds

acl all src 0.0.0.0/0.0.0.0
#acl ProxyUsers src 192.168.10.0/24
#acl ProxyUsers src 192.168.0.0/24
acl ProxyUsers src 192.168.0.0/17
acl TheOriginServer dstdomain mybackendexampleserver.com

http_access allow ProxyUsers
http_access allow TheOriginServer
http_access deny all

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80
acl Safe_ports port 443 563
acl Safe_ports port 1025-65535
acl CONNECT method CONNECT
acl snmp snmp_community COMTYPASS
acl dstbackenddom dstdomain .backenddom.com
acl dstbackend-svc dstdomain .backend-svc.com
acl dstpriv dstdomain .backendsvc-priv.com
http_access allow localhost
http_access allow manager localhost
http_access allow dstbackenddom
http_access allow dstbackend-svc 
http_access allow dstpriv
http_access allow snmp !localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
cache_mgr [EMAIL PROTECTED]
cache_effective_user squid
cache_effective_group squid
logfile_rotate 10
cachemgr_passwd PASSWD shutdown
cachemgr_passwd PASSWD info stats/objects
cachemgr_passwd disable all
icon_directory /opt/squid/share/icons
error_directory /opt/squid/share/errors/English
snmp_port 3401
snmp_access allow snmp
snmp_access deny all
uri_whitespace strip
strip_query_terms off
cache_store_log /opt/log/squid/storage.log
no_cache deny CONNECT
emulate_httpd_log on












[squid-users] Understanding the stats.

2005-02-22 Thread Ray Charles


Hi,


Looking at some stats from Cache Manager. I originally
thought there were just hits or misses; is a Near Hit
still a miss? I am hoping for a deeper explanation of
Near Hits. Also, can you tell me: doesn't this look
like a poorly performing squid server? It's for
approximately 30 hours.

Connection information for squid:
        Number of clients accessing cache:              5384
        Number of HTTP requests received:               7124911
        Number of ICP messages received:                0
        Number of ICP messages sent:                    0
        Number of queued ICP replies:                   0
        Request failure ratio:                          0.00
        Average HTTP requests per minute since start:   3641.7
        Average ICP messages per minute since start:    0.0
        Select loop called: 26095903 times, 4.498 ms avg
Cache information for squid:
        Request Hit Ratios:             5min: 99.1%, 60min: 98.5%
        Byte Hit Ratios:                5min: 99.0%, 60min: 98.8%
        Request Memory Hit Ratios:      5min: 78.4%, 60min: 91.2%
        Request Disk Hit Ratios:        5min: 4.5%, 60min: 4.1%
        Storage Swap size:              802820 KB
        Storage Mem size:               90464 KB
        Mean Object Size:               609.12 KB
        Requests given to unlinkd:      0
Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):    0.00678   0.00379
        Cache Misses:           0.03241   0.04776
        Cache Hits:             0.00463   0.00379
        Near Hits:              169.11253 169.11253
        Not-Modified Replies:   0.0       0.0
        DNS Lookups:            0.0       0.05078
        ICP Queries:            0.0       0.0
Resource usage for squid:
        UP Time:        117388.930 seconds
        CPU Time:       54245.630 seconds
        CPU Usage:      46.21%
        CPU Usage, 5 minute avg:        42.93%
        CPU Usage, 60 minute avg:       38.88%
        Process Data Segment Size via sbrk(): 179290 KB
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 52592
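
As a sanity check on those averages: 7,124,911 requests over
117,388.93 seconds of uptime is 7,124,911 / (117,388.93 / 60),
roughly 3,642 requests per minute, which matches the reported
3641.7, so the counters at least look self-consistent.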








RE: [squid-users] What did I do to make squid behave this way??

2005-02-19 Thread Ray Charles
> Does it work when the collapse forwarding patch is
>not used ?
>
>M


The behavior occurs even in a plain vanilla install of
squid, so it happens regardless of the patch.

Thanks-







[squid-users] What did I do to make squid behave this way??

2005-02-18 Thread Ray Charles
Hi,

My setup-
squid-stable8-20050218 with the collapsed_forwarding patch.
RHEL-3.1 on a 3.6 GHz machine with 1 GB of RAM, software RAID.

I've been trying to understand how squid works for the
past week or so, and I'm looking for what causes the
following behavior when squid receives a request to
fetch a URI not associated with my network:


Squid stops w/SEGV message

2005/02/18 20:54:34| storeLateRelease: released 0 objects
FATAL: Received Segment Violation...dying.
2005/02/18 21:10:56| storeDirWriteCleanLogs: Starting...
2005/02/18 21:10:56| WARNING: Closing open FD   12
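
In case it helps, here is roughly what I plan to do to get
more detail the next time it dies; the binary and core paths
are guesses based on my install prefix:

ulimit -c unlimited        # allow core dumps before starting squid
# in squid.conf:  coredump_dir /opt/squid/cache
gdb /opt/squid/sbin/squid /opt/squid/cache/core
(gdb) bt                   # backtrace of the crash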


Any insights or pointers much appreciated,

Ray






[squid-users] build no longer fails after system upgrade

2005-02-17 Thread Ray Charles

Hi,

My original post on this matter lacked critical
info; sorry about that.

I've been able to find where my mistake was; it was
not a problem with the patch or my update.


Thanks everyone !





[squid-users] build failing after sys upgrade

2005-02-16 Thread Ray Charles
Hi,

I am sure that my problem is a direct result of a
recent system update that ran yesterday. I kind of
thought my kerberos needed updating but doing so
didn't make a difference.

A vanilla squid build works, but when I apply the patch
for collapsed_forwarding I get the following errors:

gcc -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/opt/squid/etc/squid.conf\" -I. -I. -I../include -I. -I. -I../include -I../include -I/usr/kerberos/include -g -O2 -Wall -D_REENTRANT -c `test -f client_side.c || echo './'`client_side.c
client_side.c: In function `clientProcessExpired':
client_side.c:437: `squid' undeclared (first use in this function)
client_side.c:437: (Each undeclared identifier is reported only once
client_side.c:437: for each function it appears in.)
client_side.c:437: too many decimal points in floating constant
client_side.c:455: too many decimal points in floating constant
client_side.c: In function `clientCacheHit':
client_side.c:1637: `squid' undeclared (first use in this function)
client_side.c:1637: too many decimal points in floating constant
client_side.c: In function `clientProcessMiss':
client_side.c:2548: `squid' undeclared (first use in this function)
client_side.c:2548: too many decimal points in floating constant
make[3]: *** [client_side.o] Error 1


Looking for suggestions..
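
One guess I am checking on my own: errors like "`squid'
undeclared" and "too many decimal points in floating constant"
at the same source lines look like leftover patch or merge
text (for example a line such as
"<<<<<<< squid-2.5.STABLE9/src/client_side.c") sitting in the
file, so:

grep -n '<<<<<<<\|=======\|>>>>>>>' src/client_side.c
sed -n '430,460p' src/client_side.c     # eyeball the lines gcc complains about
ls src/*.rej src/*.orig 2>/dev/null     # any rejected hunks from the patch?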





[squid-users] Increasing the File Descriptors

2005-02-10 Thread Ray Charles

I am getting errors about running out of FDs, and I
checked the FAQ and found a couple of pointers: doing a
ulimit before running ./configure and editing the
/usr/include/bits/types.h file.  I just wanted to ask
whether I should also increase FD_SETSIZE.  Probably
not, but here is that part of ./configure's output.

checking Default FD_SETSIZE value... 1024
checking Maximum number of filedescriptors we can open... 16384

Sorry if this seems like an obvious question, but
what caching situation calls for a bigger FD_SETSIZE?
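
For context, this is how I have been checking what the running
squid actually has available; it assumes squidclient is
installed and that the cachemgr ACLs allow the request:

squidclient mgr:info | grep -i 'file descr'
# shows the maximum, available and reserved descriptor counts squid sees
# (add -h host / -p port if squid is not on localhost:3128)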


Thanks-
Ray




[squid-users] Is this the correct patch to use?

2005-02-07 Thread Ray Charles


Hi,

I am running squid-2.5.STABLE7 on RHEL-3. I am seeing
multiple requests for the same web object, and it's
creating a bottleneck. The original request appears
never to be cacheable.  After some reading and googling
I'm thinking that the collapsed_forwarding patch will
help me. What else should I check?
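
In case my plan itself is wrong, this is roughly how I intend
to apply and enable it; the patch filename and the -p level
are guesses:

cd squid-2.5.STABLE7
patch -p1 < collapsed_forwarding.patch
./configure ... && make && make install
# then in squid.conf:
#   collapsed_forwarding on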

-- Thanks in advance


