Re: [squid-users] high load issues

2010-02-12 Thread Justin Lintz
On Wed, Feb 10, 2010 at 4:23 PM, Amos Jeffries  wrote:

>> http_access allow localhost
>> http_access allow all
>
> Why?

Sorry I should mention this is running in a reverse proxy setup

>
> So what is the request/second load on Squid?
> Is RAID involved?

The underlying disks are running in a RAID 1 configuration.  Each
server is seeing around 170 req/sec during peak traffic

>
> You only have 4GB of storage. Thats just a little bit above trivial for
> Squid.
>
> With 4GB of RAM cache and 4GB of disk cache, I'd raise the maximum object
> size a bit. or at least remove the maximum in-memory object size. It's
> forcibly pushing half the objects to disk, when there is just as much space
> in RAM to hold them.
>
> Amos

Would this only be the case for a forward proxy?  I'd say probably
less than 1% of our objects are anywhere near the memory limit.
Thanks for the reply
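
For reference, a minimal sketch of what Amos is suggesting (the values here are illustrative assumptions, not tested recommendations):

```
# squid.conf sketch -- raise the overall object cap and the in-memory cap
# so objects that would fit in the 4GB cache_mem are not forced to disk:
maximum_object_size 32768 KB
maximum_object_size_in_memory 8192 KB
```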


Re: [squid-users] high load issues

2010-02-10 Thread Justin Lintz
>
> dont top list,
>
> we have several heavy-load squids, and we realized that sometimes web browsing
> is slow. We discovered that it is because of IO (as you see in your top command,
> more than 1% of IO wait), so we purge our cache so it doesn't reach the
> cache_swap_high percentage very often
>

The iowait time is more than 1%, at times between 20-50%.  We've tried
purging the cache a few times but that only appears to give temporary
relief to the issue.  I was looking to tune our configuration more
before ruling out the need for more caching servers or looking into
faster disks for just the cache.
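
For anyone following along, iowait can be read straight from /proc/stat; a small sketch of the arithmetic (field layout per the Linux proc(5) man page, sample numbers are made up):

```python
def iowait_pct(cpu_before, cpu_after):
    """Percentage of CPU time spent in iowait between two /proc/stat samples.

    Each sample is the tuple of counters from the "cpu" line:
    (user, nice, system, idle, iowait, irq, softirq).
    """
    delta = [after - before for before, after in zip(cpu_before, cpu_after)]
    total = sum(delta)
    return 100.0 * delta[4] / total if total else 0.0

# Illustrative numbers: 30 of 100 jiffies spent waiting on disk I/O.
print(iowait_pct((0, 0, 0, 0, 0, 0, 0), (10, 0, 40, 20, 30, 0, 0)))  # -> 30.0
```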


Re: [squid-users] high load issues

2010-02-10 Thread Justin Lintz
We're seeing the symptoms across 4 servers on different hardware.
What would be the reason for adjusting the cache_swap_high to 96?
Thanks

- Justin Lintz



On Wed, Feb 10, 2010 at 11:45 AM, Luis Daniel Lucio Quiroz
 wrote:
> Le Mercredi 10 Février 2010 10:36:40, Justin Lintz a écrit :
>> Squid ver: squid-2.6.STABLE21-3
>> The server is a xen virtual with 6GB of ram available to it.
>>
>> relevant lines in Squid.conf:
>>
>> hierarchy_stoplist cgi-bin ?
>> acl apache rep_header Server ^Apache
>> broken_vary_encoding allow apache
>> cache_mem 4096 MB
>> maximum_object_size 8192 KB
>> maximum_object_size_in_memory 4096 KB
>> cache_swap_low 95
>> cache_swap_high 96
>> cache_dir aufs /www/apps/squid/var/cache 4096 16 256
>> logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %tr
>> access_log /www/logs/squid/access.log combined
>>  cache_log /www/logs/squid/cache.log
>>  cache_store_log /www/logs/squid/store.log
>> debug_options ALL,1 33,2
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern .               0       20%     4320
>> negative_ttl 0
>> collapsed_forwarding on
>> refresh_stale_hit 5 seconds
>> half_closed_clients off
>> acl all src 0.0.0.0/0.0.0.0
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443
>> acl Safe_ports port 80          # http
>> acl Safe_ports port 21          # ftp
>> acl Safe_ports port 443         # https
>> acl Safe_ports port 70          # gopher
>> acl Safe_ports port 210         # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280         # http-mgmt
>> acl Safe_ports port 488         # gss-http
>> acl Safe_ports port 591         # filemaker
>> acl Safe_ports port 777         # multiling http
>> acl CONNECT method CONNECT
>> acl PURGE method PURGE
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny PURGE
>> http_access allow localhost
>> http_access allow all
>> http_reply_access allow all
>> icp_access allow all
>> httpd_suppress_version_string on
>> cachemgr_passwd none config
>> error_directory /www/apps/squid/errors
>> coredump_dir /var/spool/squid
>> minimum_expiry_time 15 seconds
>> max_filedesc 8192
>>
>> Symptoms:
>> - High load avg on box ranging from 6-10 during traffic hours
>> - CPU iowait time during times will be between 20-50%
>> - SO_FAIL status codes seen in store.log
>>  - MaintainSwapSpace is continually completing in under a second.  This
>> appears to be normal, though, judging by our dev and stage squid setups
>> which have no load.
>>  - From squidaio_counts, seeing the queue spike upwards to 200 or
>> more.  I saw a mention in the O'Reilly book that if this number is
>> greater than 5x the number of IO threads, then squid is overworked.
>> - Cache_dir storage size is constantly at the cache_swap_low value
>> (94%).  Does this mean squid is continually garbage collecting and
>> possibly causing the high IO?  Originally we had the number at 90, but
>> after reading some threads, adjusted it to 94 for the low and 95 for
>> the high, hoping to reduce IO with a smaller amount of data being
>> garbage collected.  This change didn't have any impact.
>> - Saw a couple of warnings in cache.log saying
>> "squidaio_queue_request: WARNING - Disk I/O overloading"
>> - High number of create.select_fail events in store_io screen in the
>> cache manager.  Seeing this number at 12% of the total IO calls.
>>
>> From reading posts on the list from people with similar issues, one
>> suggestion we will implement next is configuring a second cache_dir to
>> increase the number of threads available for IO.
>>
>> I wanted to know if you had any other suggestions for tweaks that
>> could be made that would hopefully alleviate the load on the box.
>>
>> A couple of other tweaks we have already implemented are mounting the
>> partition where the cache is stored with the noatime option and using
>> tcmalloc in place of GNU malloc.
>>
>> I saw a recommendation of changing the store_dir_select_algorithm to
>> round-robin but from reading this
>> http://www.squid-cache.org/mail-archive/squid-users/200011/0794.html
>> it sounded like the change would increase the response times.
>>
>>
>>
>>
>> - Justin Lintz
> Change your
> cache_swap_high 96
>
> to something higher; 98 might work.
> Also look for hardware errors.
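
A sketch of the suggested change (whether it helps depends on the workload; widening the low/high gap means replacement runs less often, but each pass evicts more data):

```
# squid.conf -- keep the low-water mark, raise the high-water mark:
cache_swap_low 95
cache_swap_high 98
```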
>


[squid-users] high load issues

2010-02-10 Thread Justin Lintz
Squid ver: squid-2.6.STABLE21-3
The server is a xen virtual with 6GB of ram available to it.

relevant lines in Squid.conf:

hierarchy_stoplist cgi-bin ?
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 4096 MB
maximum_object_size 8192 KB
maximum_object_size_in_memory 4096 KB
cache_swap_low 95
cache_swap_high 96
cache_dir aufs /www/apps/squid/var/cache 4096 16 256
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh %tr
access_log /www/logs/squid/access.log combined
 cache_log /www/logs/squid/cache.log
 cache_store_log /www/logs/squid/store.log
debug_options ALL,1 33,2
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
negative_ttl 0
collapsed_forwarding on
refresh_stale_hit 5 seconds
half_closed_clients off
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl PURGE method PURGE
http_access allow manager localhost
http_access deny manager
http_access deny PURGE
http_access allow localhost
http_access allow all
http_reply_access allow all
icp_access allow all
httpd_suppress_version_string on
cachemgr_passwd none config
error_directory /www/apps/squid/errors
coredump_dir /var/spool/squid
minimum_expiry_time 15 seconds
max_filedesc 8192

Symptoms:
- High load avg on box ranging from 6-10 during traffic hours
- CPU iowait time during times will be between 20-50%
- SO_FAIL status codes seen in store.log
 - MaintainSwapSpace is continually completing in under a second.  This
appears to be normal, though, judging by our dev and stage squid setups
which have no load.
 - From squidaio_counts, seeing the queue spike upwards to 200 or
more.  I saw a mention in the O'Reilly book that if this number is
greater than 5x the number of IO threads, then squid is overworked.
- Cache_dir storage size is constantly at the cache_swap_low value
(94%).  Does this mean squid is continually garbage collecting and
possibly causing the high IO?  Originally we had the number at 90, but
after reading some threads, adjusted it to 94 for the low and 95 for
the high, hoping to reduce IO with a smaller amount of data being
garbage collected.  This change didn't have any impact.
- Saw a couple of warnings in cache.log saying
"squidaio_queue_request: WARNING - Disk I/O overloading"
- High number of create.select_fail events in store_io screen in the
cache manager.  Seeing this number at 12% of the total IO calls.
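
The O'Reilly rule of thumb mentioned above can be sketched as follows (the threads-per-cache_dir figure is an assumption; many 2.x aufs builds default to 16):

```python
def aio_overloaded(queue_len, n_cache_dirs, threads_per_dir=16):
    """Rule of thumb from the O'Reilly Squid book: the async-I/O request
    queue should stay under 5x the total number of I/O threads."""
    total_threads = n_cache_dirs * threads_per_dir
    return queue_len > 5 * total_threads

print(aio_overloaded(200, 1))  # queue 200 vs limit 5*16=80  -> True
print(aio_overloaded(200, 3))  # queue 200 vs limit 5*48=240 -> False
```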

From reading posts on the list from people with similar issues, one
suggestion we will implement next is configuring a second cache_dir to
increase the number of threads available for IO.
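
A hedged sketch of that change (paths and sizes are assumptions; each aufs cache_dir gets its own thread pool, which is what increases the available I/O threads):

```
# squid.conf -- split the 4GB store across two aufs cache_dirs:
cache_dir aufs /www/apps/squid/var/cache1 2048 16 256
cache_dir aufs /www/apps/squid/var/cache2 2048 16 256
```

Ideally the two directories would sit on separate spindles; on the same RAID 1 set the gain is limited to the extra threads.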

I wanted to know if you had any other suggestions for tweaks that
could be made that would hopefully alleviate the load on the box.

A couple of other tweaks we have already implemented are mounting the
partition where the cache is stored with the noatime option and using
tcmalloc in place of GNU malloc.

I saw a recommendation of changing the store_dir_select_algorithm to
round-robin but from reading this
http://www.squid-cache.org/mail-archive/squid-users/200011/0794.html
it sounded like the change would increase the response times.




- Justin Lintz


Re: [squid-users] CPU spikes, heap fragmentation, memory_pools_limit question

2009-03-19 Thread Justin Lintz
Ben,

I ran into the same issue as well.  It was solved by using Google's
tcmalloc (http://goog-perftools.sourceforge.net/).  You can either link
statically against it or load it dynamically as a drop-in replacement
for malloc.  Our CPU usage dropped significantly after it was
installed.
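
A sketch of the dynamic-loading route (library path and squid location are assumptions; adjust for your distribution):

```sh
# Preload tcmalloc so malloc/free resolve to it with no recompile,
# e.g. in the init script that starts squid:
LD_PRELOAD=/usr/lib/libtcmalloc.so.0 /usr/sbin/squid -sYD

# If you linked statically instead, confirm with:
# ldd /usr/sbin/squid | grep -i tcmalloc
```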


- Justin Lintz


Re: [squid-users] Configuring reverse proxy for both 80/443

2008-03-05 Thread Justin Lintz
Nick,

Try creating a separate dstdomain acl for ssl.insiderserver.com and
allow that in your cache_peer_access line for the SSL connection

- Justin
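
A minimal sketch of that suggestion against Nick's posted config (hostname taken from his acl line; untested):

```
acl ssl_sites dstdomain ssl.insiderserver.com
cache_peer_access example_ssl allow ssl_sites
cache_peer_access example_http deny ssl_sites
cache_peer_access example_http allow example_sites
```

The deny line keeps HTTPS requests from falling through to the plain-HTTP peer, which is what the repeated example_http log entries suggest is happening.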

On Wed, Mar 5, 2008 at 11:35 AM, Nick Duda <[EMAIL PROTECTED]> wrote:
> Still not working properly. Here is what my configuration looks like,
> followed by what it is doing:
>
> http_port 80 defaultsite=www.insideserver.com vhost
> https_port 443 cert=/path/to/cert/example.crt
> key=/path/to/key/example.key defaultsite=ssl.insideserver.com vhost
> #
> acl example_sites dstdomain www.insideserver.com ssl.insiderserver.com
> acl example_ssl proto HTTPS
> #
> cache_peer 192.168.0.10 parent 443 0 no-query originserver ssl
> name=example_ssl
> cache_peer_access example_ssl allow example_sites example_ssl
> #
> cache_peer 192.168.0.10 parent 1080 0 no-query originserver
> name=example_http
> cache_peer_access example_http allow example_sites
>
>
> I setup an entry in my host file:
> 68.x.x.x  www.insiderserver.com
>
> I open IE and browse to www.insiderserver.com and it works, no problem
> I browse to ssl.insideserver.com, which is the same server as
> www.insideserver.com but requires SSL to connect, and IE just
> spins, thinking, over and over. I look at the access.log on the proxy
> and over and over it keeps trying to make a connection, but it's saying
> example_http even though I'm trying for the SSL version
>
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> FIRST_UP_PARENT/example_http text/html
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> ANY_PARENT/example_http text/html
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> FIRST_UP_PARENT/example_http text/html
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> ANY_PARENT/example_http text/html
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> FIRST_UP_PARENT/example_http text/html
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> ANY_PARENT/example_http text/html
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> FIRST_UP_PARENT/example_http text/html
> TCP_MISS/302 574 GET https://ssl.insideserver.com -
> ANY_PARENT/example_http text/html
>
>
>
>
> -Original Message-
> From: Anthony Tonns [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 05, 2008 10:44 AM
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] Configuring reverse proxy for both 80/443
>
> You want something like this:
>
> http_port 80 defaultsite=www.example.com vhost
> https_port 443 cert=example.crt key=example.key
> defaultsite=www.example.com vhost
> #
> acl example_sites dstdomain www.example.com example.com
> acl example_ssl proto HTTPS
> #
> cache_peer 127.0.0.1 parent 1443 0 no-query originserver ssl
> name=example_ssl
> cache_peer_access example_ssl allow example_sites example_ssl
> #
> cache_peer 127.0.0.1 parent 1080 0 no-query originserver
> name=example_http
> cache_peer_access example_http allow example_sites
>
> > -Original Message-
> > From: Nick Duda [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, March 04, 2008 5:11 PM
> > To: squid-users@squid-cache.org
> > Subject: [squid-users] Configuring reverse proxy for both 80/443
> >
> > I seem to be stumped. I need to reverse proxy for one internal server
> > that listens on both 80 and 443. How can I configure squid to proxy
> for
> > the same cache-peer on both 80 and 443? As far as I can see you can
> only
> > specify one protocol per cache-peer line. I think I am missing
> > something.
> >
> > - Nick
>



-- 
- Justin Lintz