Re: [squid-users] squid performance & epoll. 350req/sec 100% cpu

2006-03-29 Thread Stefan Neufeind
Ralf Hildebrandt wrote:
> * Stefan Neufeind [EMAIL PROTECTED]:
>
>> Try compiling with the epoll patch. It made an enormous
>> performance improvement here. System time is wasted in connection
>> handling (poll/select), which epoll does far more efficiently.
>
> Call me a retard, but where can I find this patch?

Hi Ralf,

see here:
http://devel.squid-cache.org/projects.html

I can also offer you an SRPM of the original Fedora FC4 release with
epoll already integrated (it needed a little tweaking). If you're
interested, just let me know. Since you still have my ICQ contact
afaik, just msg me :-)


Regards,
 Stefan


Re: [squid-users] squid performance & epoll. 350req/sec 100% cpu

2006-03-27 Thread Stefan Neufeind
Michal Mihalik wrote:
> Hello.
> I am trying to optimize squid for the best possible performance.
> It is in production and doing more than 350 req/sec, at peaks up
> to 500 req/sec.
>
> My problem is only one: 100% cpu. :-)
>
> I tried updating my Debian to 2.6.16 and recompiled squid:
>
> Squid Cache: Version 2.5.STABLE12
> configure options:  --prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin
> --sbindir=/usr/sbin --libexecdir=/usr/lib/squid --sysconfdir=/etc/squid
> --localstatedir=/var/spool/squid --datadir=/usr/share/squid
> --enable-async-io --with-pthreads --enable-storeio=ufs,aufs,diskd,null
> --enable-linux-netfilter --enable-arp-acl --enable-removal-policies=lru,heap
> --enable-snmp --enable-delay-pools --enable-htcp --enable-epoll
> --enable-cache-digests --enable-underscores --enable-referer-log
> --enable-useragent-log --enable-auth=basic,digest,ntlm --enable-carp
> --with-large-files i386-debian-linux
>
> The thing I really don't like is 25% cpu + 50% system cpu.
>
> Why the 50% system?!
> Can anyone help?

Try compiling with the epoll patch. It made an enormous performance
improvement here. System time is wasted in connection handling
(poll/select), which epoll does far more efficiently.
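
The difference is easy to see in a toy example: poll()/select() make
the kernel copy and scan the whole descriptor set on every call, so
the cost grows with the number of open connections, while epoll
registers descriptors once and reports only the ready ones. A minimal
sketch of the pattern (a plain illustration, not squid's actual comm
loop):

  /* toy epoll loop: echoes stdin to stdout */
  #include <stdio.h>
  #include <sys/epoll.h>
  #include <unistd.h>

  int main(void)
  {
      int epfd = epoll_create(1024);       /* size is only a hint */
      if (epfd < 0) { perror("epoll_create"); return 1; }

      /* register the descriptor once... */
      struct epoll_event ev;
      ev.events = EPOLLIN;
      ev.data.fd = STDIN_FILENO;
      if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
          perror("epoll_ctl"); return 1;
      }

      struct epoll_event ready[64];
      for (;;) {
          /* ...then only ready descriptors are returned, so the cost
             per wakeup does not depend on the total number of fds */
          int n = epoll_wait(epfd, ready, 64, -1);
          for (int i = 0; i < n; i++) {
              char buf[4096];
              ssize_t len = read(ready[i].data.fd, buf, sizeof buf);
              if (len <= 0) return 0;
              write(STDOUT_FILENO, buf, len);
          }
      }
  }

To see what a running squid actually uses, something like
"strace -c -p <squid-pid>" for half a minute shows whether the time
goes into poll()/select() or into epoll_wait().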


Regards,
 Stefan


Re: [squid-users] Non-cached pages with squid 2.5-stable13

2006-03-20 Thread Stefan Neufeind
Mark Elsen wrote:
>> Hi Mark,
>>
>> thank you for that tool. It reports:
>>
>> http://[...]/cache/setHeader.php/getTheList
>> Expires         35 sec from now  (Sun, 19 Mar 2006 10:49:01 GMT)
>> Cache-Control   -
>> Last-Modified   25 sec ago       (Sun, 19 Mar 2006 10:48:01 GMT)
>>                 validation returned same object
>> ETag            -
>> Content-Length  6.3K (6438)
>> Server          Apache
>>
>> This object will be fresh for 35 sec. It has a validator present, but
>> when a conditional request was made with it, the same object was sent
>> anyway.
>>
>> I've read that squid might not cache it because the Expires is less
>> than 60 seconds in the future. Is that true? But even raising the
>> limit returned (currently 60 seconds, maybe to 120) does not work
>> for me.
>>
>> What does "validation returned same object" mean, and why is it
>> printed in red?
>
> Not sure, what is returned for this object with:
>
>   http://web-sniffer.net
>
> ?

Hmm - it does not show anything special either. But while trying to
track this down I noticed that most of the time I receive TCP_MEM_HIT,
but sometimes a run of TCP_REFRESH_MISS occurs for 10 or 20 seconds
(at 1 request per second) for no obvious reason, and only sporadically.
This seems to happen with and without the collapsed-forwarding patch.
I'll try to find a reliable way to diagnose it before continuing
this thread.
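
Probably something simple to watch the access.log while it happens,
e.g. (log path may differ on your system):

  tail -f /var/log/squid/access.log | \
    awk '$4 ~ /TCP_REFRESH_MISS/ { print $1, $4, $7 }'

In squid's native log format field 4 is the result code and field 7
the URL, so this should show exactly when those runs start and stop.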

Thank you for your help so far.


Regards,
 Stefan


Re: [squid-users] Non-cached pages with squid 2.5-stable13

2006-03-19 Thread Stefan Neufeind
Mark Elsen wrote:
> On 3/19/06, Stefan Neufeind [EMAIL PROTECTED] wrote:
>> Hi,
>>
>> at the moment I am trying out a squid 2.5-stable13 from Fedora Core 4,
>> hand-patched with collapsed-forwarding support and epoll. Those two
>> additional features work quite well. But currently some pages are
>> unfortunately not cached by squid.
>> ...
>> ...
>
>   http://www.ircache.net/cgi-bin/cacheability.py

Hi Mark,

thank you for that tool. It reports:


http://[...]/cache/setHeader.php/getTheList
Expires         35 sec from now  (Sun, 19 Mar 2006 10:49:01 GMT)
Cache-Control   -
Last-Modified   25 sec ago       (Sun, 19 Mar 2006 10:48:01 GMT)
                validation returned same object
ETag            -
Content-Length  6.3K (6438)
Server          Apache


This object will be fresh for 35 sec. It has a validator present, but
when a conditional request was made with it, the same object was sent
anyway.


I've read that squid might not cache it because the Expires is less
than 60 seconds in the future. Is that true? But even raising the
limit returned (currently 60 seconds, maybe to 120) does not work
for me.

What does "validation returned same object" mean, and why is it
printed in red?
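
To dig further I'll probably replay squid's conditional request by
hand and check whether the origin ever answers "304 Not Modified" -
roughly like this (hostname is a placeholder):

  curl -s -D - -o /dev/null \
    -H 'If-Modified-Since: Sun, 19 Mar 2006 10:48:01 GMT' \
    http://YOUR-SERVER/cache/setHeader.php/getTheList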


Regards,
 Stefan


[squid-users] Non-cached pages with squid 2.5-stable13

2006-03-18 Thread Stefan Neufeind

Hi,

At the moment I am trying out a squid 2.5-stable13 from Fedora Core 4,
hand-patched with collapsed-forwarding support and epoll. Those two
additional features work quite well. But currently some pages are
unfortunately not cached by squid. I wonder why - and whether it might
have to do with the Vary headers the webserver is sending.

A called script returns:

Date: ... (current date)
Server: Apache
Expires: ... (like date, approx 2min in the future)
Last-Modified: ... (shortly before Date)
Vary: Accept-Encoding
Content-Length: ...
Connection: close
Content-Type: text/html

The Vary header is used to deliver gzip-compressed or uncompressed
content (compression is done inside PHP) to clients which do or don't
support it.
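
As far as I understand, squid then has to keep one variant per
distinct Accept-Encoding value it sees, roughly:

  request with "Accept-Encoding: gzip,deflate"  ->  variant A (gzipped)
  request without Accept-Encoding               ->  variant B (plain)

so two copies of the object can end up in the cache, selected by that
request header.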

Though I _think_ everything should be fine, on each request for this
object squid includes an If-Modified-Since header in its upstream
request which is already more than 2 hours in the past - possibly the
time when squid was started and/or first tried to cache a copy of
the page.

The clocks of both squid and the webserver are in sync. Is there a
reason why squid does not cache the content, and why it might be using
an IMS that far in the past? Static content is cached fine - but that
content does not carry Vary or Expires headers. I've seen notes from
(afaik) squid 2.5-stable11 that pages with Vary headers are now cached.
Could it be that in some special cases they are not cached yet?

By the way: The squid is running in httpd_accel mode with proxy, in
front of several webservers (which are in sync) defined via cache_peer.
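
For reference, the relevant part of the setup looks roughly like this
(2.5-style directives; addresses and options are placeholders, not my
exact config):

  httpd_accel_host virtual
  httpd_accel_port 80
  httpd_accel_with_proxy on
  httpd_accel_uses_host_header on
  cache_peer 10.0.0.11 parent 80 0 no-query round-robin
  cache_peer 10.0.0.12 parent 80 0 no-query round-robin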


Any hints to track this down would be welcome!


Yours sincerely,
  Stefan Neufeind




[squid-users] Performance-problems on reverse-proxy squid

2005-03-12 Thread Stefan Neufeind
Dear all,

I'm running a squid proxy (squid 2.5.stable7-1) in reverse-proxy mode
in front of two webservers. Squid load-balances equally across the
servers and answers requests for static pages/images/... itself.
Because of the site's content, squid is able to serve about 80-85% of
the requests itself. Statistics report about 500 requests/second
hitting squid, with output to the internet of about 20 MBit/s during
peak times.

The squid machine has an Intel Pentium IV 3.2GHz with two on-board
Intel Pro/1000 network adapters (e1000 driver), one for the outbound
link and one for the internal network to the webservers. The system
runs Fedora Core 3 with the latest 2.6.x kernel from Fedora.
There are 2GB of RAM, with 1024MB allowed for squid; about 500MB of
memory is used for filesystem buffers and about 200MB for kernel
buffers. Disk access is about 15 write I/Os per second, and slightly
fewer read I/Os.

My problem is that during peak hours the machine runs at very high
CPU utilisation: about 65% shows up as "system". So far I haven't been
able to isolate the bottleneck. I also used the InterruptThrottleRate
option of the network adapters to limit their interrupt rate and
increased the Tx/RxDescriptors - without any change in CPU utilisation.

Does somebody have an idea how I could find out where that system CPU
time goes, and how to lower it? Friends told me to watch out for
buffer settings that could reduce the number of kernel-to-userspace
transfers - but I didn't find much.
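
Is there a standard way to see where the system time actually goes? I
guess something like

  vmstat 1                   # context switches and interrupts per second
  strace -c -p <squid-pid>   # per-syscall time summary; stop after ~30s

would at least show whether the time is spent in poll()/select(), in
read()/write(), or elsewhere - but maybe there are better tools
(oprofile?).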


Feedback/help is _very_ much appreciated.


Yours sincerely,
 Stefan Neufeind