[squid-users] Squid Optimization

2013-06-11 Thread Matthew Ceroni
We are running squid as our primary proxy here at my office.

What I am noticing is that connectivity is fine but every now and then
the browser sits at "Sending request". If I hop onto the proxy and
view the access log I don't see it logging my request. After a few
seconds, sometimes as many as 10-15, the request comes through.

My thinking is that the request is getting queued. I did a
little research on maximum concurrent connections, but all I came
across was how to limit a specific user to a maximum number of
concurrent connections.

Prior to deploying squid I read through some performance and
optimization guides. I increased the open file handle limit to 8192
and am currently monitoring it to see if we run out (usage usually
doesn't get above 1600 or so).
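
For reference, here is roughly how the descriptors are being watched (a
sketch only; the exact invocations are assumptions rather than the precise
commands in use):

# count open descriptors held by the main squid process
ls /proc/$(pgrep -o -x squid)/fd | wc -l
# or ask squid itself via the cache manager
squidclient mgr:info | grep -i 'file desc'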

I am more familiar with Apache, where you can specify how many
children to spawn, worker counts, etc., to handle requests. Is there
something similar in squid? The machine we are running squid on has
4 cores and the load barely breaks 0.25.

We are running this on CentOS, which officially has only 3.1.10 in its
repositories. I tried installing the latest stable version and
configuring the workers option. However, I have never been able to get
squid to start up with that enabled; it always says it can't bind to
the port: no such file or directory. Without going to the latest
version that supports the workers option, is there something in squid
where you can specify how many processes to spawn, or how many
concurrent connections a squid process can handle (queue depth)?

Thanks in advance


Re: [squid-users] Squid Optimization

2013-06-11 Thread Matthew Ceroni
Hey:

Thanks for the response.

I just tried moving to 3.2.8-1 from this repo:
http://www2.ngtech.co.il/rpm/centos/6/$basearch

I chose 3.2.8 instead of 3.3.5 because I looked at the init script
and noticed a few errors in it, and I was primarily upgrading to get
access to the workers configuration option, which is available in 3.2.
I was also able to fix the issue I had getting that running. It turns
out the RPM package didn't create the run directory (/var/run/squid)
needed for the socket files. Once I created it, squid started up just
fine.
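
For anyone hitting the same startup failure, a rough sketch of the fix plus
the workers directive I was after (the ownership and worker count are
assumptions for a 4-core host, not verified values):

# create the run directory the RPM did not provide
mkdir -p /var/run/squid
chown squid:squid /var/run/squid

# squid.conf (3.2+): one worker per core as a starting point
workers 4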

So my hope is this provides better performance and makes use of the
SMP capabilities of our proxy host. Was my thinking correct that in
version 3.1 requests would get queued up, and that is why there was
latency in processing them? For 3.1, how many concurrent requests
could the squid daemon handle at one time?

On a somewhat different topic, I am having trouble getting the no-log
option to work. In version 3.1 I simply had a file like this:

.jira.com
.concursolutions.com
.typesafe.com
.maven.org
.surveymonkey.com
.salesforce.com
.webex.com

Saved it as nolog.txt and added the following to squid.conf:

acl NoLogSites url_regex -i "/etc/squid/nolog.txt"
log_access deny NoLogSites

However, with that in place it now doesn't log anything at all.

Thanks again

On Tue, Jun 11, 2013 at 3:58 PM, Eliezer Croitoru  wrote:
> On 6/12/2013 12:43 AM, Matthew Ceroni wrote:
>>
>> We are running squid as our primary proxy here at my office.
>>
>> What I am noticing is that connectivity is fine but every now and then
>> the browser sits at "Sending request". If I hop onto the proxy and
>> view the access log I don't see it logging my request. After a few
>> seconds, sometimes as many as 10-15, the request comes through.
>>
>> My thinking is that the request is getting queued. I did a
>> little research on maximum concurrent connections, but all I came
>> across was how to limit a specific user to a maximum number of
>> concurrent connections.
>>
>> Prior to deploying squid I read through some performance and
>> optimization guides. I increased the open file handle limit to 8192
>> and am currently monitoring it to see if we run out (usage usually
>> doesn't get above 1600 or so).
>>
>> I am more familiar with Apache, where you can specify how many
>> children to spawn, worker counts, etc., to handle requests. Is there
>> something similar in squid? The machine we are running squid on has
>> 4 cores and the load barely breaks 0.25.
>>
>> We are running this on CentOS, which officially has only 3.1.10 in its
>> repositories. I tried installing the latest stable version and
>> configuring the workers option. However, I have never been able to get
>> squid to start up with that enabled; it always says it can't bind to
>> the port: no such file or directory. Without going to the latest
>> version that supports the workers option, is there something in squid
>> where you can specify how many processes to spawn, or how many
>> concurrent connections a squid process can handle (queue depth)?
>>
>> Thanks in advance
>>
> There is a small bug related to SHM which can be fixed.
> What RPM have you tried on your CentOS server? Or did you compile it yourself?
> If you have used my RPM, there is a small bug in the init script that will
> hopefully be fixed in the upcoming 3.3.6 release.
> If you can share the info on the bug, we might be able to help you solve it.
>
> Eliezer


Re: [squid-users] Squid Optimization

2013-06-12 Thread Matthew Ceroni
Thanks.

But is there a config setting (prior to the workers option) that
controls how many concurrent connections squid can handle? Using the
Apache example again, where you configure MaxClients,
ThreadsPerChild, etc., is there anything like that in squid?

On Tue, Jun 11, 2013 at 6:36 PM, Marcus Kool  wrote:
>
>
> On 06/11/2013 06:43 PM, Matthew Ceroni wrote:
>>
>> We are running squid as our primary proxy here at my office.
>>
>> What I am noticing is that connectivity is fine but every now and then
>> the browser sits with "Sending request". If I hope on the proxy and
>> view the access log I don't see it logging my request. After a few
>> seconds, sometimes as many as 10 - 15, the request comes through.
>>
>> My thought process is that the request is getting queued. I did a
>> little research about maximum concurrent connections but all I came
>> across was how to limit a specific user to max concurrent connections.
>
>
> There is too little information to draw a conclusion, but one should
> note that the HTTP command is only logged _after_ it has finished.
> So if you upload something large, or if the webserver needs 15 seconds
> to process the request before it replies with a "thank you",
> it is normal to see the HTTP command in the log file only after 10-15
> seconds.
>
> In case you still have doubts, you need to raise debug levels
> to find out what Squid is doing with the HTTP request.
>
> Marcus
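
For reference, a minimal sketch of raising the debug level as Marcus
suggests (the section numbers and levels are only illustrative; the debug
sections list for the installed version should be checked first):

# squid.conf: keep everything at level 1 but turn up the HTTP and
# client-side sections while investigating
debug_options ALL,1 11,3 33,3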


[squid-users] log_access

2013-06-24 Thread Matthew Ceroni
I am trying to prevent certain requests from being logged to the access log.

I have the following configuration snippet:

acl NoLogSites url_regex -i "/etc/squid/nolog.txt"
log_access deny NoLogSites

Within /etc/squid/nolog.txt:

.jira.com
.concursolutions.com
.typesafe.com
.maven.org
.surveymonkey.com
.salesforce.com
.webex.com

However, with this in place nothing gets logged at all. I even tried
escaping the dots with \. but got the same result. With this directive
in place, nothing gets logged to access.log; remove it and everything
gets logged.

Running on CentOS 6.4 with squid 3.1.10
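
One thing worth trying, as a sketch only: the entries above are domain
suffixes rather than regular expressions, so a dstdomain ACL may be a better
fit than url_regex (whether this also changes the logging behaviour is an
open question):

acl NoLogSites dstdomain "/etc/squid/nolog.txt"
log_access deny NoLogSites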


[squid-users] Exchange WebServices (EWS)

2013-08-20 Thread Matthew Ceroni
Hi:

Let me start by saying I am not an Exchange expert by any means.
However, we have two different network segments. One allows direct
outbound access without having to go through squid (used only for a
few select devices/users). The other has to go through squid for
outbound services.

When on the segment that has to go through squid, Exchange Web Services
does not work. But when on the other segment (which doesn't need squid)
it works. Therefore I can only assume that squid is somehow blocking
or breaking the connection.

Checking the access log, I do not see any DENIED messages for that
connection. From the googling I did, it seems that EWS does RPC over
HTTPS. Is there any configuration in squid that has to be done to
allow this?
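
If the RPC-over-HTTPS traffic uses CONNECT on a non-standard port, the stock
access rules could be interfering. For reference, a sketch of the relevant
defaults and a possible tweak (the extra port number is purely illustrative):

acl SSL_ports port 443
acl SSL_ports port 444        # example only: any extra port EWS might need
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports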

Thanks


[squid-users] Upgrading squid and cache

2013-08-28 Thread Matthew Ceroni
Hi:

I recently tried to update to the latest squid (3.3.8) from the 3.1.10
that is available in CentOS. On secondary proxy servers that had no
cache, upgrading went fine.

However, on the main physical proxy that has cache drives configured,
squid said it started up (the port was listening) but would not service
any requests. I noted in the cache.log file a reference to bug 3441
and something about rebuilding the cache.

Is there an upgrade process I have to run on the cache? Again, squid
appeared to be listening but did not service any requests.


Re: [squid-users] Upgrading squid and cache

2013-08-28 Thread Matthew Ceroni
Amos:

While this upgrade is going on, is squid prevented from servicing
requests? Because that is the behavior I was seeing. Also, if I tried
to do a "service squid stop" it would never stop; I ended up having to
kill the processes.

On Wed, Aug 28, 2013 at 6:53 PM, Amos Jeffries  wrote:
> On 29/08/2013 10:38 a.m., Matthew Ceroni wrote:
>>
>> Hi:
>>
>> I recently tried to update to the latest squid (3.3.8) from the 3.1.10
>> that is available in CentOS. On secondary proxy servers that had no
>> cache, upgrading went fine.
>>
>> However, on the main physical proxy that has cache drives configured,
>> squid said it started up (the port was listening) but would not service
>> any requests. I noted in the cache.log file a reference to bug 3441
>> and something about rebuilding the cache.
>>
>> Is there an upgrade process I have to run on the cache? Again, squid
>> appeared to be listening but did not service any requests.
>
>
> Squid is doing the cache format upgrade automatically. The process is a disk
> scan identical to the one performed when the swap.state journal is corrupted
> or missing. It may take some time if you have a large cache_dir.
>
> As a workaround you can run squid as a proxy-only instance for the production
> traffic (like your secondary one) and run another instance with a dummy config
> file to do the cache rebuild "offline". Then swap back to the normal
> configuration with the cache when the rebuild has finished.
>
> Amos
>
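
For my own notes, a rough sketch of that workaround (the file names and the
split between the two configs are assumptions, not an exact recipe):

# production instance: the usual squid.conf with the cache_dir line
# commented out, so it serves traffic as a proxy-only
squid -f /etc/squid/squid-nocache.conf

# second instance: a minimal config containing the real cache_dir plus a
# different http_port and pid_filename, left running until the rebuild ends
squid -f /etc/squid/squid-rebuild.conf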


[squid-users] Slow response times on certain websites

2013-08-29 Thread Matthew Ceroni
Hi:

I am seeing a peculiar symptom: slow response times for certain websites.

On my production proxy server (which has a large on-disk cache and
in-memory cache), if I navigate to www.wellsfargo.com, click on
Commercial and then log in, the login page takes upwards of 60 seconds
to display. Tailing the access log confirms this, as the total response
time logged is 60,000 ms.

As I am using CentOS, which only has 3.1 in the official repositories,
I spun up a test proxy and upgraded it to the latest squid, 3.3.8. I
then tested going to the same website. Everything looked great: the
Wells Fargo site loaded instantly.

Therefore I upgraded my physical production proxy to 3.3.8 and hoped
to see the same thing. Unfortunately that did not happen; I still saw
60-second response times.

I then went back to my test proxy and tried again. While pointed at
the test proxy the page loaded just fine. However, the peculiar thing
is that in the access log I didn't see a log message right away. Then
after 60 seconds or so I would see:

29/Aug/2013:09:39:22 -0700  41553 192.168.2.123 TCP_MISS/200 23273
CONNECT wellsoffice.wellsfargo.com:443
HIER_DIRECT/wellsoffice.wellsfargo.com -

So while the page loaded instantly (I even cleared the browser cache
and restarted squid so the memory cache was cleared, and there is no
disk cache on my test proxy), squid reported that it took 41 seconds
to service the request.

Thanks


[squid-users] Unable to download WebEx recordings

2013-10-08 Thread Matthew Ceroni
It appears that the ability to download WebEx recordings is impacted
by squid; in what way, I don't really know.

All I know is that when bypassing squid the downloads complete without
issue. However, when proxying through squid the download speed jumps
from high to low and eventually the transfer stops before the file
completes (in this case a 64 MB file; I have never had issues
downloading large ISOs before).

There is nothing logged in the access log either. From the firewall's
perspective I see Reset (RST) flags.

Any info on additional ways to troubleshoot would be appreciated.

I am running 3.3.8 on CentOS 6.

Thanks


[squid-users] Download Issues

2013-10-21 Thread Matthew Ceroni
When downloading files from certain websites, my end users experience
intermittent issues.

On one site, WebEx, where you can download recordings, transfer speeds
start off high, gradually drop to zero, and the transfer dies. While
researching this issue I came across the TCP window scale option. I
verified that my firewall (a Cisco ASA running the latest ASA software)
is set to allow window scaling (it doesn't zero out the options). The
odd thing with this issue is that I don't see anything logged to the
access log: no start of transfer or end of transfer.
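
A quick sketch of how this can be double-checked from the proxy host itself
(the interface name is an assumption):

# is window scaling enabled on the proxy?
sysctl net.ipv4.tcp_window_scaling
# watch the SYN/SYN-ACK handshake and look for the wscale option
tcpdump -ni eth0 -v 'tcp[tcpflags] & (tcp-syn) != 0'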

Another user reported that they were receiving HTTP 500 errors when
trying to download artifacts from Artifactory (a build server). When
looking at access.log I see TCP_MISS/500 entries. Usually HTTP 500
errors are server-side errors, so I told my user that the server they
are downloading from is returning the 500 error. They didn't agree,
and as a temporary workaround I allowed that server direct access out
to the internet. They now say the issue is resolved (I don't really
agree with them at this point).

What is the best route to go in debugging download issues?

Thanks


[squid-users] File Descriptor Issue

2014-06-24 Thread Matthew Ceroni
Hi:

I am running squid version 3.3.8 on CentOS v6.5.

I have an issue regarding file descriptors. Our proxy has run into an
issue the past few days (everyone streaming FIFA soccer I believe)
where we receive the following error:

WARNING! Your cache is running out of filedescriptors

At this point the CPU usage of each squid worker spikes to 100% and
the SNMP agent becomes unavailable.

I was able to get some information from squidclient mgr:info.

My question is around the file descriptor statistics:

When the issue is happening it reports:

File descriptor usage for squid:
Maximum number of file descriptors:   16384
Largest file desc currently in use:548
Number of file desc currently in use:  540
Files queued for open:   0
Available number of file descriptors: 15844
Reserved number of file descriptors:   100
Store Disk files open:   0

After a restart of squid it reports the following:

File descriptor usage for squid:
Maximum number of file descriptors:   65536
Largest file desc currently in use:516
Number of file desc currently in use:  475
Files queued for open:   0
Available number of file descriptors: 65061
Reserved number of file descriptors:   400
Store Disk files open:   0


The OS ulimit for open files is 8192. According to the documentation
for max_filedescriptors (which I don't have set in my squid.conf), the
default value is the ulimit value. So why, on first startup, does it
list a maximum of 65k FDs? And when the issue arises, why does that
drop to 16k? According to the output above, only roughly 1k FDs are in
use at any point, so why is squid reporting that it is running out of
file descriptors?
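
For what it's worth, the limit can also be pinned explicitly rather than
inherited; a sketch using the ulimit figure from above:

# squid.conf
max_filedescriptors 8192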

Any assistance would be appreciated.

Thanks


Re: [squid-users] File Descriptor Issue

2014-06-25 Thread Matthew Ceroni
Amos:

Thanks for the info. I am using workers so that explains why the
output of mgr:info shows such a high number of file descriptors.

If you don't mind, I need some further clarification on the output of
mgr:info. To summarize, I have a global nofile limit of 8192, and the
squid user is set to a soft limit of 4096 and a hard limit of 8192.
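
For context, those limits come from entries along these lines (a sketch of
/etc/security/limits.conf; the exact file and syntax on this host may
differ):

squid  soft  nofile  4096
squid  hard  nofile  8192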

The output of mgr:info contains the statistic "Largest file desc
currently in use" (in my case it shows 3956). What exactly is that
indicating? Would it be safe to assume that at some point in time a
maximum of 3956 FDs were in use (assuming each open FD is given an
incrementing number)?

Also, when squid first starts up it lists the maximum number of file
descriptors as 65536, but when I get the warning that my cache is low
on FDs, that value changes to 16384. Why is that?

Thanks

On Wed, Jun 25, 2014 at 4:37 AM, Amos Jeffries  wrote:
> On 25/06/2014 5:01 a.m., Matthew Ceroni wrote:
>> Hi:
>>
>> I am running squid version 3.3.8 on CentOS v6.5.
>>
>> I have an issue regarding file descriptors. Our proxy has run into an
>> issue the past few days (everyone streaming FIFA soccer I believe)
>> where we receive the following error:
>>
>> WARNING! Your cache is running out of filedescriptors
>>
>> At this point the CPU usage of each squid worker spikes to 100% and
>> the SNMP agent becomes unavailable.
>>
>> I was able to get some information from squidclient mgr:info.
>>
>> My question is around the file descriptor statistics:
>>
>> When the issue is happening it reports:
>>
>> File descriptor usage for squid:
>> Maximum number of file descriptors:   16384
>> Largest file desc currently in use:548
>> Number of file desc currently in use:  540
>> Files queued for open:   0
>> Available number of file descriptors: 15844
>> Reserved number of file descriptors:   100
>> Store Disk files open:   0
>>
>> After a restart of squid it reports the following:
>>
>> File descriptor usage for squid:
>> Maximum number of file descriptors:   65536
>> Largest file desc currently in use:516
>> Number of file desc currently in use:  475
>> Files queued for open:   0
>> Available number of file descriptors: 65061
>> Reserved number of file descriptors:   400
>> Store Disk files open:   0
>>
>>
>> The OS ulimit for open files is 8192. According to the documentation
>> for max_filedescriptors (which I don't have set in my squid.conf), the
>> default value is the ulimit value. So why, on first startup, does it
>> list a maximum of 65k FDs? And when the issue arises, why does that
>> drop to 16k? According to the output above, only roughly 1k FDs are in
>> use at any point, so why is squid reporting that it is running out of
>> file descriptors?
>>
>
> On system startup the user initiating Squid is "root", and the set of FD
> limits applicable to the root user is used. For later actions the proxy's
> low-privilege user account is used, and a different set of limits applies
> there.
> The squid.conf directive value can be set if it is either smaller than
> the limit the user account permits, or if the user account has
> permission to raise the value.
>
> There are also unresolved questions about whether there are bugs in
> the logic that determines the FD limit.
>
>
>
> PS. Are you running with SMP workers? The 3.3 squid versions display FD
> numbers that sum the workers' FD limits together, and the log entry is made
> if any one worker (or disker) reaches capacity.
>
> Amos


[squid-users] SNMP cacheClients

2014-06-27 Thread Matthew Ceroni
I am monitoring my squid server via SNMP and graphing the data in
Cacti. Of particular importance to me is the number of clients, which
is a graph of the cacheClients statistic (1.3.2.1.15.0). The graph
shows we reach a maximum of 1300 clients.

This seems a bit odd to me, as we only have around 200 users. Even if
you double that (each user has a desktop and a wireless device) you
don't get anywhere close to 1300. So what is this SNMP value truly
reporting? What constitutes a client? Is it per IP?
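
For reference, the value is being polled roughly like this (the community
string is a placeholder; 3401 is only the default snmp_port, and the full
OID assumes the standard Squid enterprise base .1.3.6.1.4.1.3495):

snmpget -v 2c -c public proxyhost:3401 .1.3.6.1.4.1.3495.1.3.2.1.15.0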

Thanks