[squid-users] Re: kid2| WARNING: disk-cache maximum object size is unlimited but mem-cache maximum object size is 32.00 KB

2013-10-27 Thread Ahmad
hi all,

Nice to tell you that I found the solution to this problem.

My problem was that when I wanted to shut down Squid,

I repeated "squid -k shutdown" many times!!!

That's why a conflict occurred when I started Squid again before letting it fully go down!!

Again,

the solution is: if I want to restart Squid,
I have to run

squid -k shutdown

then wait some time, about 60 seconds,

then start it again with:

squid



With the steps above, no more hanging.
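
A more robust variant of the same sequence is to wait until the old master
process has actually exited, instead of a fixed 60 seconds (a sketch, assuming
the pid file /var/run/squid.pid from the build options quoted later in this
digest):

squid -k shutdown
# poll until the old master process is gone, then start a fresh one
while [ -e /var/run/squid.pid ] && kill -0 "$(cat /var/run/squid.pid)" 2>/dev/null
do
    sleep 1
done
squid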


thanks for all


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/kid2-WARNING-disk-cache-maximum-object-size-is-unlimited-but-mem-cache-maximum-object-size-is-32-00-B-tp4662892p4662959.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Why are we getting bad percentage of hits in Squid3.3 compared with Squid2.6 ?

2013-10-27 Thread Amos Jeffries

On 28/10/2013 1:29 p.m., Manuel wrote:

Hi Eliezer, thank you for your answer

The origin servers are the same in 2.6 and in 3.3 (in both cases Squid
connects to the same origin remote servers) and the squid.conf is exactly the
same except in the very first lines (since acl manager proto cache_object,
etc. are obsolete).


2.6 is HTTP/1.0 with a little HTTP/1.1 functionality. Squid-3.3 is fully 
HTTP/1.1.


As Eliezer already mentioned, the caching behaviour in each protocol is 
very different. Down to and including the fact that 3.3 caches far 
*more* content than 2.6 will, which can reduce the HIT ratio due to not 
having the storage space for the longer-term objects that were 
previously giving a good HIT rate.




The vast majority of the misses are TCP_MISS/200. I checked the last 200
requests to the homepage of our site several times (the min/max age is 1
minute - but I also tried with a few more minutes) in the access.log file,
and these were the results:

Squid 2.6:
1st check: 5 misses of 200 requests
2nd check: 0 misses of 200 requests
3rd check: 2 misses of 200 requests

Squid 3.3:
1st check: 59 misses of 200 requests
2nd check: 32 misses of 200 requests
3rd check: 108 misses of 200 requests

*Nothing was touched between each check, just a pause of a few seconds or
minutes.


What would the URL of that page be?
 What does the tool at redbot.org say about its cacheability?
 And what are the refresh_patterns in use?
 How was the check made?
 How big is your cache storage?

redbot.org is an invaluable tool for identifying what the HTTP behaviour 
is expected to be for any given URL. It tests the multiple different request 
types HTTP/1.1 can contain.
The 3.3 version of Squid with debug_options 11,2 configured will log to 
cache.log a full set of the HTTP request and reply headers on each 
in/outbound connection so you can see exactly what case the proxy is 
facing and determine what the expected behaviour should actually have been.
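
For example, in squid.conf (a sketch; the ALL,1 keeps other sections quiet,
matching the debug_options usage shown elsewhere in this digest):

debug_options ALL,1 11,2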


Note that the logged status is client-focused; the server status may be 
very different. Although in the specific case of "TCP_MISS/200" the 
cache storage is not supposed to have been involved, so the client and 
server replies should be near identical.



I was thinking that maybe I should --enable-http-violations in Squid3.3 to
make use of override-expire ignore-reload, but I think that it is already
enabled by default, since negative_ttl is working properly and requires
--enable-http-violations. Indeed I reduced some misses by using
negative_ttl in squid.conf, because Squid3.3 was producing misses on 404
requests while Squid2.6 was producing hits without the need of setting that
directive.


Violations are enabled by default in the name of tolerance, since there are 
so many, many broken clients and servers out there.


If you can tune those 404 responses to contain Cache-Control headers allowing 
storage, that would be better. negative_ttl stores many errors and can 
result in a self-induced DoS against its own clients.
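
A minimal sketch of such a response, assuming a 60-second lifetime is
acceptable for your error pages:

HTTP/1.1 404 Not Found
Cache-Control: public, max-age=60
Content-Type: text/html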


It sounds like you are running the web server. If so, you should look 
into the cacheability of response objects and then tune the web server 
responses. There are many other caches around the world also storing 
your content, and to make the best use of them your site needs to 
work without violations. As the Squid warnings say, using violations 
places responsibility for brokenness on the admin using them - don't feel 
obliged to tune your origin server output much for incompetence 
elsewhere; Squid and other proxy software is already doing as much as 
possible to compliance-correct the traffic.


Amos


Re: [squid-users] about netdb?

2013-10-27 Thread Amos Jeffries

On 28/10/2013 8:27 a.m., Beto Moreno wrote:

Hi.

Reading my config file squid-3.1.x, I found a parameter called netdb;
googling a little, I found a site, and what I understand is this:

netdb is used when you have a bunch of squid cache servers and you use
icp/htcp stuff.

In a company where you use a single cache server for security and to speed
up the company network, we don't need netdb settings.

Do I understand right, or is netdb useful in my network layout?


netdb is the internal "Network Measurement Database" used by Squid for 
calculating RTT and traffic loading between itself and any given 
upstream server.


In the recent Squid releases it requires --enable-icmp to enable the ICMP 
pinger, but it also pulls in additional details from ICP, HTCP and HTTP 
connection and response timing. It seems to only be used by the 
background weighted peer selection algorithms such as weighted-round-robin.
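
If you do want the ICMP measurements, the pinger is a build-time option (a
sketch; combine it with whatever other configure flags you already use):

./configure --enable-icmp

For a single cache with no peers it can safely stay disabled.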


Amos



[squid-users] Re: Why are we getting bad percentage of hits in Squid3.3 compared with Squid2.6 ?

2013-10-27 Thread Manuel
Hi Eliezer, thank you for your answer

The origin servers are the same in 2.6 and in 3.3 (in both cases Squid
connects to the same origin remote servers) and the squid.conf is exactly the
same except in the very first lines (since acl manager proto cache_object,
etc. are obsolete).

The vast majority of the misses are TCP_MISS/200. I checked the last 200
requests to the homepage of our site several times (the min/max age is 1
minute - but I also tried with a few more minutes) in the access.log file,
and these were the results:

Squid 2.6:
1st check: 5 misses of 200 requests
2nd check: 0 misses of 200 requests
3rd check: 2 misses of 200 requests

Squid 3.3:
1st check: 59 misses of 200 requests
2nd check: 32 misses of 200 requests
3rd check: 108 misses of 200 requests

*Nothing was touched between each check, just a pause of a few seconds or
minutes.

I was thinking that maybe I should --enable-http-violations in Squid3.3 to
make use of override-expire ignore-reload, but I think that it is already
enabled by default, since negative_ttl is working properly and requires
--enable-http-violations. Indeed I reduced some misses by using
negative_ttl in squid.conf, because Squid3.3 was producing misses on 404
requests while Squid2.6 was producing hits without the need of setting that
directive.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-are-we-getting-bad-percentage-of-hits-in-Squid3-3-compared-with-Squid2-6-tp4662949p4662956.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: kid2| WARNING: disk-cache maximum object size is unlimited but mem-cache maximum object size is 32.00 KB

2013-10-27 Thread Linda Walsh

Noticed a few items I would try to simplify (these are suggestions
from my own experience...)...

1) For storeio, don't use "rock" -- it limits items on disk to 32k, and on
a 486 you aren't going to want what it was designed for (multi-core systems).

2) Get rid of things you don't need, i.e., are you really using delay pools?
cache-digests? icap? esi? Are you only trying to run it as
a "transparent proxy" (enable-linux-netfilter)?

If you have nothing working... try tossing the auth and acl helpers from your
first test version... Once you get something working, add your needed features
back in.

Given that it's your first & only squid proxy, I doubt you need cache digests
in your test version, for example (I believe they are used for sharing content
between squid servers...)... Simplify until you get something working...
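
A stripped-down first-test build along those lines might look like this (a
sketch assembled from flags in the configure output quoted below; adjust
paths to your own layout):

./configure --prefix=/usr --sysconfdir=/etc/squid \
  --with-logdir=/var/log/squid \
  --with-pidfile=/var/run/squid.pid \
  --with-default-user=squid \
  --enable-storeio=ufs,aufs \
  --enable-removal-policies=lru,heap \
  --enable-linux-netfilter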




On 10/25/2013 1:58 PM, Ahmad wrote:

hi, I compiled squid 3.3.8 with the options below:

Squid Cache: Version 3.3.8
configure options:  '--build=i486-linux-gnu' '--prefix=/usr'
'--includedir=/include' '--mandir=/share/man' '--infodir=/share/info'
'--sysconfdir=/etc' '--enable-cachemgr-hostname=drx' '--localstatedir=/var'
'--libexecdir=/lib/squid' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules' '--srcdir=.'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--disable-translation'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-filedescriptors=131072' '--with-large-files'
'--with-default-user=squid' '--enable-linux-netfilter'
'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2' 'LDFLAGS='
'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g 
-Wall -O2' --enable-ltdl-convenience



I followed the example at:
http://wiki.squid-cache.org/Features/SmpScale


The problem is that a warning occurs and then squid goes down.
Here it is!!
*kid2| WARNING: disk-cache maximum object size is unlimited but mem-cache
maximum object size is 32.00 KB*

The logs say that squid is down, but I can still access squid on its port. As
far as I can tell, though, no caching is occurring on the hard disks!!! It has
a fixed size and all logs are TCP_MISS, not TCP_HIT!!!

I googled a lot but with no benefit.

Here is the log file when I start squid:
[root@DataBase ~]# tailf /var/log/squid/backend.cache.log 
2013/10/25 22:47:39 kid2| Logfile: closing log

stdio:/var/log/squid/backend.access.log
2013/10/25 22:47:39 kid4| Open FD UNSTARTED 8 DNS Socket IPv6
2013/10/25 22:47:39 kid4| Open FD UNSTARTED 9 DNS Socket IPv4
2013/10/25 22:47:39 kid4| Open FD READING  13 
2013/10/25 22:47:39 kid2| Open FD UNSTARTED 8 DNS Socket IPv6

2013/10/25 22:47:39 kid2| Open FD UNSTARTED 9 DNS Socket IPv4
2013/10/25 22:47:39 kid2| Open FD READING  13 
2013/10/25 22:47:39 kid3| Squid Cache (Version 3.3.8): Exiting normally.

2013/10/25 22:47:39 kid2| Squid Cache (Version 3.3.8): Exiting normally.
2013/10/25 22:47:39 kid4| Squid Cache (Version 3.3.8): Exiting normally.



2013/10/25 22:50:24 kid4| Preparing for shutdown after 0 requests
2013/10/25 22:50:24 kid4| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid4| Shutdown: NTLM authentication.
2013/10/25 22:50:24 kid4| Shutdown: Negotiate authentication.
2013/10/25 22:50:24 kid4| Shutdown: Digest authentication.
2013/10/25 22:50:24 kid4| Shutdown: Basic authentication.
2013/10/25 22:50:24 kid2| Preparing for shutdown after 0 requests
2013/10/25 22:50:24 kid3| Preparing for shutdown after 0 requests
2013/10/25 22:50:24 kid2| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid3| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid2| Closing HTTP port 127.0.0.1:4002
2013/10/25 22:50:24 kid2| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid3| Waiting 30 seconds for active connections to
finish
2013/10/25 22:50:24 kid2| Closing HTTP port 127.0.0.1:4002
2013/10/25 22:50:24 kid3| Closing HTTP port 127.0.0.1:4003
2013/10/25 22:50:24 kid2| Shutdown: NTLM authentication.
2013/10/25 22:50:24 kid3| Closing HTTP port 127.0.0.1:4003
2013/10/25 22:50:24 kid2| Shutdown: NTLM authentication.
2013/10/25 22:50:24 kid2| Shutdown: Negotiate authentication.
2013/10/25 22:50:24 kid3| Shutdown: NTLM authentication.
2013/10/25 22:50:24 kid3| Shutdown: Negotiate authentication.
2013/10/25 22:50:24 kid2|

Re: [squid-users] question in "cpu_affinity_map" directive

2013-10-27 Thread Alex Rousskov
On 10/26/2013 03:29 PM, Ahmad wrote:

> #  TAG: cpu_affinity_map
> #   Usage: cpu_affinity_map process_numbers=P1,P2,... cores=C1,C2,...
> #   
> #   Sets 1:1 mapping between Squid processes and CPU cores. For example,
> #   
> #   cpu_affinity_map process_numbers=1,2,3,4 cores=1,3,5,7
> #   
> #   affects processes 1 through 4 only and places them on the first   
> #   four even cores, starting with core #1.
> # 
> #   CPU cores are numbered starting from 1. Requires support for
> #   sched_getaffinity(2) and sched_setaffinity(2) system calls.


> It says that the cores start with number 1!!

Cores are _numbered_ starting with number 1. That means that the very
first core is assigned number 1. The second core is assigned number 2.

AFAICT, there is no standard way to number cores. In some environments,
CPU numbering starts from 0, while in others numbering starts with 1.
Squid uses the latter scheme because it just felt more "natural" to
number the second core as #2 rather than #1. I think this was even
discussed on squid-dev, but I failed to find a reference quickly.


> Why can't I use core #0???

You can use the first core. The first core is assigned #1 in the
cpu_affinity_map context.


> Can I map one squid process to more than one core?

Yes, but not using cpu_affinity_map in squid.conf. Use taskset(1)
instead. Eventually, cpu_affinity_map may support mapping algorithms
other than 1:1.
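
For example (a sketch with a hypothetical worker PID; note that taskset
numbers cores from 0):

# pin the process with PID 12345 to CPU cores 0 and 1
taskset -pc 0,1 12345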


HTH,

Alex.



Re: [squid-users] something not being understood in ,workers , squid proces , cores mapping

2013-10-27 Thread Alex Rousskov
On 10/27/2013 02:58 AM, Ahmad wrote:

> 1- I want an equation that gives the number of Squid processes relative
> to the cache_dir and worker number.
> 
> ex:
> With 3 workers and 1 rock cache = 5 processes running:

The number of Squid processes in a single Squid instance started without
a -N command-line option usually is:

+ 1   master process (counted only in SMP mode) plus
+ W   workers (workers in squid.conf; defaults to 1) plus
+ D   diskers (rock cache_dirs in squid.conf; defaults to 0) plus
+ 1   Coordinator process (exists only in SMP mode).

The terms used above (and in my earlier emails) are defined below.

For example, if you do not explicitly configure Squid workers and rock
cache_dirs, then Squid will run in non-SMP mode and you will get
0+1+0+0=1 Squid process total.

For example, if you explicitly configure Squid with 1 worker and 3 rock
cache_dirs, then Squid will run in SMP mode and you will get 1+1+3+1=6
Squid processes total.
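
In squid.conf terms, that second example corresponds to something like this
(a sketch with hypothetical paths and sizes):

workers 1
cache_dir rock /var/cache/squid/rock1 1000
cache_dir rock /var/cache/squid/rock2 1000
cache_dir rock /var/cache/squid/rock3 1000
# 1 master + 1 worker + 3 diskers + 1 coordinator = 6 processes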


> 2- What's the difference between a squid instance and a worker?

Please see the definitions below.


> Why is it better to give a worker a core, unlike the cores of a "rock disk"?

Sorry, I do not understand this question. Please rephrase, but keep in
mind that busy Squid diskers are blocked on disk I/O most of the time,
while workers usually consume a lot of CPU cycles (and should virtually
never be blocked until Squid becomes overloaded).


> 3- Which will scan and read squid.conf first: the instance of squid or
> the worker of squid?

The master process (defined below) interprets squid.conf first.


And here are the definitions of the terms I have been using:

Instance: All processes running as a result of a single "squid" command.
This includes, but is not limited to, kid processes defined below.

Kid: A Squid process (i.e., a process running Squid executable code)
created by the Master process. Coordinator, worker, and diskers defined
below are often Squid kids.

Worker: A Squid process accepting HTTP or HTTPS requests. Workers are
usually created by the Master process. In general, workers are
responsible for most transaction processing but may outsource some of
their work to helpers (directly), other workers (via Coordinator), or
even independent servers (via ICAP, DNS, etc).

Disker: A Squid process dedicated to cache_dir I/O. Diskers are created
by the Master process. Today, only Rock cache_dirs may use diskers.

Coordinator: A Squid process dedicated to synchronizing other kids.

Master: The first Squid process created when you run a "squid" command.
The Master process is responsible for starting and restarting all kids.
This definition is not 100% accurate because the OS creates the first
process and that first Squid process then forks the actual Master
process to become a daemon (except for "squid -N"). Since that first
OS-created process exits immediately after fork, this inaccurate
definition works OK for most purposes. Better wording is welcomed!

SMP mode: Squid is said to be working in SMP mode when the sum of the
number of worker and disker processes exceeds one. Here are three random
examples of a Squid instance working in SMP mode: 2 workers and 0
diskers; 1 worker and 1 disker; 2 workers and 3 diskers. Sometimes, the
same "SMP mode" term is used to mean "multiple workers"; that usage
excludes configurations with a single worker and multiple diskers; such
usage should be avoided.


Please note that the same process may play multiple roles. For example,
when you start Squid with the -N command line option, there will be only
one Squid process running and that single process plays the roles of
Master and Worker.

Finally, please note that others may use different definitions.


HTH,

Alex.



[squid-users] about netdb?

2013-10-27 Thread Beto Moreno
Hi.

Reading my config file squid-3.1.x, I found a parameter called netdb;
googling a little, I found a site, and what I understand is this:

netdb is used when you have a bunch of squid cache servers and you use
icp/htcp stuff.

In a company where you use a single cache server for security and to speed
up the company network, we don't need netdb settings.

Do I understand right, or is netdb useful in my network layout?

Thanks for your time.


Re: [squid-users] Re: 3x cpu usage after upgrade 3.1.20 to 3.3.8

2013-10-27 Thread Eliezer Croitoru

Hey,

Can you try not to look at the SNMP figures but at top output instead?
Just wondering if there is a possibility of something else.
What does top or htop show?
top has the "1" option which shows you all the CPUs in the machine.
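
For example (a sketch; mpstat assumes the sysstat package is installed):

top                  # then press "1" to toggle the per-CPU view
mpstat -P ALL 1 3    # non-interactive alternative: three 1-second samples, all CPUs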

Eliezer

On 10/27/2013 11:03 AM, Omid Kosari wrote:



Amos Jeffries-2 wrote

Is traffic speed 2-3 times faster or higher as well?


No .



Amos Jeffries-2 wrote

Is disk I/O processing higher? (a large rock store swapping data to/from
disk would cause both CPU and disk increases)


No .

Everything is the same as before. I have 2 squid boxes; one of them has rock,
the other doesn't even have rock. I just upgraded, and both boxes have
increased CPU usage.






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3x-cpu-usage-after-upgrade-3-1-20-to-3-3-8-tp4662906p4662943.html
Sent from the Squid - Users mailing list archive at Nabble.com.





Re: [squid-users] Why are we getting bad percentage of hits in Squid3.3 compared with Squid2.6 ?

2013-10-27 Thread Eliezer Croitoru

Hey Manuel,

Squid 2.6 and 3.3.9 have a couple of changes that might cause this "side effect".
There is a big difference between the lower number of *hard* hits being logged 
as TCP_HIT versus TCP_MISS/304 and many other combinations.


Analyzing a squid access.log from the 3.3 branch is not the same as for
2.6/2.7/others.

If the origin server is serving an object that cannot be cached by 
default, troubles are likely to happen.
If the objects from the origin server are cachable by default then it 
will be very simple to track down why the TCP_MEM_HIT or TCP_HIT rates 
are lower.
Notice that TCP_MISS/304 can reflect a HIT that the squid cache actually 
made possible: the client software just verified that the object which 
squid allowed to be cached was indeed fetched from a local cache (browser 
or other).
Squid in the new branches has a re-validation function that can look 
like not-a-HIT while it is an integrity check to make sure that the 
client is fetching, or has fetched, the right and up-to-date object from 
the cache.
To make sure that 3.3.9 has a high hit rate you will need more than just 
"HITS VS MISSES".

You will need to understand how the logs work and what they reflect.

If you want a more accurate verification, get one copy of the 
access.log (if there is one) and make sure you analyze it in depth.

Can you find TCP_MISS/304 in the logs? If so, how many of them?
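
A quick way to count them (a sketch, assuming the default log location):

grep -c 'TCP_MISS/304' /var/log/squid/access.log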

If you need more help I'm here.

Eliezer


On 10/27/2013 06:43 PM, Manuel wrote:

Hi,

We are moving to new more powerful servers with CentOS 6 64bit instead of
CentOS 5, with SSD instead of HDD and with Squid 3.3.9 instead of Squid
2.6.STABLE21. We are running Squid as a reverse proxy.

The performance of Squid 3.3.9 seems to be excellent, tested with around
125087 "clients accessing cache" and 2 "file descriptors in use" in a
single server, but there is one big problem: we are getting a much worse
percentage of hits in Squid 3.3.9 in comparison with the same config in
Squid 2.6. With the same config we are getting around 99% of hits in Squid 2.6
and just 55-60% of hits on Squid 3.3.9.

We are using the refresh_pattern options override-expire ignore-reload (as
mentioned before, it is the same squid.conf config in every server - old and
new ones).

Any idea on what might be the problem or any suggestion on how to find it?

Thank you in advance

PS: Since we have lots of concurrent connections requesting the same
addresses, we expect to add the Cache-Control stale-while-revalidate=60
header to the main webpages, which I guess will increase the number of hits
on the servers running Squid 3.3.9. But for the moment the goal is to first
try to reach a percentage more similar to Squid 2.6 in the same situation,
since stale-while-revalidate would be ignored by Squid 2.6.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-are-we-getting-bad-percentage-of-hits-in-Squid3-3-compared-with-Squid2-6-tp4662949.html
Sent from the Squid - Users mailing list archive at Nabble.com.





[squid-users] Why are we getting bad percentage of hits in Squid3.3 compared with Squid2.6 ?

2013-10-27 Thread Manuel
Hi,

We are moving to new more powerful servers with CentOS 6 64bit instead of
CentOS 5, with SSD instead of HDD and with Squid 3.3.9 instead of Squid
2.6.STABLE21. We are running Squid as a reverse proxy.

The performance of Squid 3.3.9 seems to be excellent, tested with around
125087 "clients accessing cache" and 2 "file descriptors in use" in a
single server, but there is one big problem: we are getting a much worse
percentage of hits in Squid 3.3.9 in comparison with the same config in
Squid 2.6. With the same config we are getting around 99% of hits in Squid 2.6
and just 55-60% of hits on Squid 3.3.9.

We are using the refresh_pattern options override-expire ignore-reload (as
mentioned before, it is the same squid.conf config in every server - old and
new ones).

Any idea on what might be the problem or any suggestion on how to find it?

Thank you in advance

PS: Since we have lots of concurrent connections requesting the same
addresses, we expect to add the Cache-Control stale-while-revalidate=60
header to the main webpages, which I guess will increase the number of hits
on the servers running Squid 3.3.9. But for the moment the goal is to first
try to reach a percentage more similar to Squid 2.6 in the same situation,
since stale-while-revalidate would be ignored by Squid 2.6.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-are-we-getting-bad-percentage-of-hits-in-Squid3-3-compared-with-Squid2-6-tp4662949.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] ICAP Error

2013-10-27 Thread Eliezer Croitoru

Hey Roman,

Can you get the output from the ICAP debug sections and the access.log output?
Use "debug_options ALL,1 93,6";
this will give more information on the ICAP issue.
In the case of HTTPS requests there is no way to use ICAP on those 
requests other than on the basic "CONNECT" request, which is either IP+port 
or DOMAIN+port.

Are you bumping the SSL traffic on this squid instance?

Eliezer

On 10/27/2013 04:28 PM, Roman Gelfand wrote:

For 99.9% of the sites, my icap services are working. There are
instances where I am getting the following icap error. I am not sure
how to start debugging it. I am using the latest squid and icap
versions.


The following error was encountered while trying to retrieve the URL:
https://www.flowroute.com/accounts/login/

ICAP protocol error.

The system returned: [No Error]

This means that some aspect of the ICAP communication failed.

Some possible problems are:

The ICAP server is not reachable.

An Illegal response was received from the ICAP server.


Thanks in advance





[squid-users] ICAP Error

2013-10-27 Thread Roman Gelfand
For 99.9% of the sites, my icap services are working. There are
instances where I am getting the following icap error. I am not sure
how to start debugging it. I am using the latest squid and icap
versions.


The following error was encountered while trying to retrieve the URL:
https://www.flowroute.com/accounts/login/

ICAP protocol error.

The system returned: [No Error]

This means that some aspect of the ICAP communication failed.

Some possible problems are:

The ICAP server is not reachable.

An Illegal response was received from the ICAP server.


Thanks in advance


Re: [squid-users] Re: Squid naps each 3600 seconds !

2013-10-27 Thread Amos Jeffries

On 27/10/2013 10:51 p.m., Omid Kosari wrote:

Following grabbed from cachemgr.cgi when digest is enabled. May I be sure
that the digest is not chosen by squid itself, and is it safe for me to set
"digest_generation off"?



That looks okay. Digests are not being generated or received.
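
If you want generation explicitly disabled, it is the one-line squid.conf
change you named:

digest_generation off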

Amos


Peer Selection Algorithms wrote

no guess stats for all peers available

Per-peer statistics:

peer digest from 1.1.1.12
no guess stats for 1.1.1.12 available

event   timestamp   secs from now   secs from init
initialized 1382745606  -119863 +0
needed  1382745843  -119626 +237
requested   1382822346  -43123  +76740
received1382822386  -43083  +76780
next_check  1382899186  +33717  +153580

peer digest state:
needed: yes, usable:  no, requested:  no

last retry delay: 76800 secs
last request response time: 40 secs
last request result: Forbidden

peer digest traffic:
requests sent: 9, volume: 0 KB
replies recv:  9, volume: 2 KB

peer digest structure:
no in-memory copy


No peer digest from 1.1.1.12


Algorithm usage:
Cache Digest:   0 (  0%)
Icp:29887 (100%)
Total:  29887 (100%)





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-naps-each-3600-seconds-tp4662811p4662944.html
Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] something not being understood in ,workers , squid proces , cores mapping

2013-10-27 Thread Amos Jeffries

On 27/10/2013 9:58 p.m., Ahmad wrote:

hi,
about SMP and workers,
I just want to understand:

1- I want an equation that gives the number of Squid processes relative
to the cache_dir and worker number.

ex:
With 3 workers and 1 rock cache = 5 processes running:

I want a general formula for the above.


"Sum of all componentes configured which cause instances to start" is 
the closest we can give in the way of general formula.


Squid has not been a single-process system since v2.0, possibly earlier. 
There are always background helper processes doing things for some 
component or another. The new Disker processes are equivalent to the old 
cache_dir diskd helpers but for rock storage format and are SMP-aware. 
Also, it *will* be changing as new SMP support is added/modified.


Get your configuration working the way you want it to go before trying 
to tune instances to cores. By that time you should have cache.log 
records to identify what each process number is doing to base the 
affinity on.


The SMP macros and if-else-endif should always be treated with care and 
used as sparingly as possible. They are temporary workarounds for the 
current incomplete or missing SMP support of some components, and we 
expect to break configurations that use them in future as SMP-support 
improves.




===
2- What's the difference between a squid instance and a worker?


"squid instance" is a fuzzy term. Its meaning changes depending on 
whether you enabled SMP mode or non-SMP mode, or debug non-daemon mode.
I think of it as meaning any instance of process running out of the 
"squid" binary.


worker is a specific type of squid instance


I'm misunderstanding!! Why is it better to give a worker a core, unlike the
cores of a "rock disk"?


... the worker process does all HTTP I/O and protocol handling. Currently 
workers also do a lot of non-HTTP protocol actions like SNMP. But that is 
planned to change eventually.



Isn't a worker a squid instance??!!!


Yes. Worker is a type of squid instance.


3- Which will scan and read squid.conf first: the instance of squid or
the worker of squid?


The first one to read the config file is something else you may not have 
noticed yet. The first instance of the squid binary to run is the daemon 
manager. It reads the config file and determines how many processes to 
fork into (if any). The result is all the kidN processes, forked at 
different times as determined by the configuration.
It then performs the high-availability monitoring to ensure that 
either the coordinator (SMP mode) or the single worker process (non-SMP 
mode) always stays running.


FYI: this design is one reason why Squid is so unfriendly with upstart 
and systemd. They try to be daemon managers themselves, and things don't 
work well if you have two managers giving conflicting HA instructions to 
a set of processes (one manager says shutdown, the other notices the outage 
and auto-starts a replacement ... etc, etc). Or one manager is managing 
the other, which has no useful effects.


Amos


[squid-users] Re: Squid naps each 3600 seconds !

2013-10-27 Thread Omid Kosari
Following grabbed from cachemgr.cgi when digest is enabled. May I be sure
that the digest is not chosen by squid itself, and is it safe for me to set
"digest_generation off"?



Peer Selection Algorithms wrote
> no guess stats for all peers available
> 
> Per-peer statistics:
> 
> peer digest from 1.1.1.12
> no guess stats for 1.1.1.12 available
> 
> event timestamp   secs from now   secs from init
> initialized   1382745606  -119863 +0
> needed1382745843  -119626 +237
> requested 1382822346  -43123  +76740
> received  1382822386  -43083  +76780
> next_check1382899186  +33717  +153580
> 
> peer digest state:
>   needed: yes, usable:  no, requested:  no
> 
>   last retry delay: 76800 secs
>   last request response time: 40 secs
>   last request result: Forbidden
> 
> peer digest traffic:
>   requests sent: 9, volume: 0 KB
>   replies recv:  9, volume: 2 KB
> 
> peer digest structure:
>   no in-memory copy
> 
> 
> No peer digest from 1.1.1.12
> 
> 
> Algorithm usage:
> Cache Digest:   0 (  0%)
> Icp:29887 (100%)
> Total:  29887 (100%)





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-naps-each-3600-seconds-tp4662811p4662944.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: 3x cpu usage after upgrade 3.1.20 to 3.3.8

2013-10-27 Thread Omid Kosari


Amos Jeffries-2 wrote
> Is traffic speed 2-3 times faster or higher as well?

No .



Amos Jeffries-2 wrote
> Is disk I/O processing higher? (a large rock store swapping data to/from 
> disk would cause both CPU and disk increases)

No .

Everything is the same as before. I have 2 squid boxes; one of them has rock,
the other doesn't even have rock. I just upgraded, and both boxes have
increased CPU usage.






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/3x-cpu-usage-after-upgrade-3-1-20-to-3-3-8-tp4662906p4662943.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] something not being understood in ,workers , squid proces , cores mapping

2013-10-27 Thread Ahmad
hi,
about SMP and workers,
I just want to understand:

1- I want an equation that gives the number of Squid processes relative
to the cache_dir and worker number.

ex:
With 3 workers and 1 rock cache = 5 processes running:

I want a general formula for the above.
===
2- What's the difference between a squid instance and a worker? I'm
misunderstanding!! Why is it better to give a worker a core, unlike the
cores of a "rock disk"?

Isn't a worker a squid instance??!!!



3- Which will scan and read squid.conf first: the instance of squid or
the worker of squid?




regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/something-not-being-understood-in-workers-squid-proces-cores-mapping-tp4662942.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: does rock type deny being dedicated to specific process ??

2013-10-27 Thread Amos Jeffries

On 27/10/2013 8:01 p.m., Ahmad wrote:

hi amos,
I read bad news about rock when the rock dir is shared between processes; I
read that it reduces the hit ratio!

i read form
http://wiki.squid-cache.org/Features/RockStore
it says :
/Objects larger than 32,000 bytes cannot be cached when cache_dirs are
shared among workers. Rock Store itself supports arbitrary slot sizes, but
disker processes use IPC I/O (rather than Blocking I/O) which relies on
shared memory pages, which are currently hard-coded to be 32KB in size. You
can manually raise the shared page size to 64KB or even more by modifying
Ipc::Mem::PageSize(), but you will waste more RAM by doing so. To
efficiently support shared caching of larger objects, we need to teach Rock
Store to read and write slots in chunks smaller than the slot size. /

As I understood it, the max object size for disk caching will be 32 KB;
am I correct???
I expect that will mean slow writing to the hard disks and slow caching!

Am I correct???


*for things stored in the rock cache_dir* only. Non-SMP cache_dir types such 
as AUFS still cache larger items.


I'm not sure if the limitation applies to memory-cached objects, but when 
SMP is enabled that is likely as well.


If this is a critical issue for you, please try out the large-rock 
experimental feature branch. It has changes which remove those 
limitations and also includes the collapsed-forwarding port from squid-2.7 
to allow backend fetches to be HIT on before they have finished arriving.



That's why I want to give each of my rock-type hard disks a single
process!!


The limitation applies to the rock storage type regardless of SMP sharing. 
It is designed to work within those same limits.



=
Also, I'm not understanding you here:
* If you use ${process_number} or
${process_name} macros these channels are never setup and things WILL break.
*




With three workers and a rock cache there are actually 5 processes running:

kid1 - worker #1  ... ${process_number} = 1
kid2 - worker #2  ... ${process_number} = 2
kid3 - worker #3  ... ${process_number} = 3
kid4 - rock disker #1 ... ${process_number} = 4
kid5 - coordinator  ... ${process_number} = 5

[I'm not completely sure I remember the order between coordinator and 
disker; it may be the other way around.]
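
The failure mode described below comes from a configuration along these lines
(a sketch with hypothetical paths and sizes; each process expands the macro
with its own number):

workers 3
cache_dir rock /rock${process_number} 1000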


If you configure each process to access a different FS directory name 
for the rock dir, you end up with the disker creating a rock DB at /rock4 
while the backend workers try to use /rock2 and /rock3. The 
coordinator thinks the rock dir exists at /rock5.
* None of the processes will accept SMP packets about altering or 
fetching rock dir contents in an area they are not configured to use.
* The workers will try to connect to diskers set up for /rock2 and 
/rock3 - which do not exist. This is the shm_open connection error you see.
* The other /rock4 message is the disker or coordinator trying to open its 
end to receive messages.


Amos


Re: [squid-users] Re: question in "cpu_affinity_map" directive

2013-10-27 Thread Amos Jeffries

On 27/10/2013 7:53 p.m., Ahmad wrote:

Well, but again, why can't we use core 0 in the mapping?



IIRC, 0 has some special meaning in the interface we use with the kernel, 
or in how Squid handles whether to set the affinity for that process. It is 
also best to reserve one CPU core for the kernel and other uses, so leaving 
0 out is a quite convenient design that resolves both of these with one limit.


Amos



[squid-users] Re: does rock type deny being dedicated to specific process ??

2013-10-27 Thread Ahmad
hi amos,
I read bad news about rock when the rock dir is shared between processes; I
read that it reduces the hit ratio!

I read from
http://wiki.squid-cache.org/Features/RockStore
it says :
/Objects larger than 32,000 bytes cannot be cached when cache_dirs are
shared among workers. Rock Store itself supports arbitrary slot sizes, but
disker processes use IPC I/O (rather than Blocking I/O) which relies on
shared memory pages, which are currently hard-coded to be 32KB in size. You
can manually raise the shared page size to 64KB or even more by modifying
Ipc::Mem::PageSize(), but you will waste more RAM by doing so. To
efficiently support shared caching of larger objects, we need to teach Rock
Store to read and write slots in chunks smaller than the slot size. /

As I understood it, the max object size for disk caching will be 32 KB;
am I correct???
I expect that will mean slow writing to the hard disks and slow caching!

Am I correct???


That's why I want to give each of my rock-type hard disks a single
process!!
=
Also, I'm not understanding you here:
* If you use ${process_number} or
${process_name} macros these channels are never setup and things WILL break.
*



regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/does-rock-type-deny-being-dedicated-to-specific-process-tp4662919p4662938.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Squid3 on CentOS 6 and tproxy

2013-10-27 Thread Amos Jeffries

On 27/10/2013 7:52 p.m., Ahmad wrote:

hi ,
ls -laR /lib/modules/`uname -r`/ | grep tproxy

-rwxr--r--.  1 root root   5632 Oct 16 21:38 nf_tproxy_core.ko

This is the module I found!! Did you mean it??!!

But I think this is a good step from CentOS, because compiling a kernel is
hard work and may cause errors!!!


Again, does Debian do that in its latest version?


Yes they do.
There were other problems in the 2.6 kernels though and I have not tried 
it recently on the 3.x kernels.


Amos