Re: [squid-users] Evaluating SQUID performance

2013-07-25 Thread John Joseph
Hi All,

My sincere thanks to Amos, Eliezer, Firas and Henrik.

Based on your advice, I have decided to try out:

    squidclient
    calamaris
    msar
    web-polygraph
Right now I am trying squidclient, trying out its options and trying to
understand them.
I have another idea which sprang up: I plan to document my tests and write a
how-to, aiming not at Squid professionals, but at people like me who have
just tried out Squid.
Thanks to all for the advice and tips.
Great mailing list.

thanks 

Joseph John



RE: [squid-users] Too many open files

2013-07-25 Thread Peter Retief

On 07/25/2013 09:25 AM, Peter Retief wrote:
 I have changed the limits in /etc/security/limits.conf to 65K, and I have 
 confirmed that the ulimits for root and squid are now 65K, but 
 squidclient mgr:info still reports a maximum of 16K per worker.
Eliezer:
Ubuntu???
Try adding the ulimit commands to the init.d script in order to force the
squid running instances / startup sequence to 65k.

It worked for me and it should work for you.
Do a restart, but first make sure to run a test instance with the squid
-f command on another port to confirm that I am right...
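
For example, a minimal sketch of that change (assuming the stock
/etc/init.d/squid start script on Ubuntu; the exact file name and start
section vary between packages, so adapt as needed):

    # in /etc/init.d/squid (or squid3), inside the start section,
    # before the squid binary is launched:
    ulimit -Hn 65535
    ulimit -Sn 65535

The ulimit calls only affect the shell running the init script and the
processes it starts, which is why values in limits.conf alone may never
reach a daemon started at boot.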

I did reboot after raising the limits, and then, before starting squid,
checked ulimit -Sn and ulimit -Hn for both the root user and the squid user.
Then, after starting squid (running from squid -s, not the init script yet), I
did a squidclient mgr:info and saw 16K per process (actually I saw the
total of 98K for 6 workers, as per Amos's comment on the incorrect
calculation in squidclient, if I interpreted his comment correctly).
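
For reference, that check for both users can be done like this (assuming the
service account is named squid, as configured here):

    # as root:
    ulimit -Sn; ulimit -Hn
    # as the squid service user:
    su - squid -s /bin/sh -c 'ulimit -Sn; ulimit -Hn'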




RE: [squid-users] Too many open files

2013-07-25 Thread Peter Retief

 To handle the load I have 6 workers, each allocated its own physical 
 disk (noatime).

 I have set ulimit -Sn 16384 and ulimit -Hn 16384, by setting 
 /etc/security/limits.conf as follows:

 #   - Increase file descriptor limits for Squid
 *   soft    nofile  16384
 *   hard    nofile  16384

 The squid is set to run as user squid.  If I login as root, then su 
 squid, the ulimits are set correctly.  For root, however, the ulimits 
 keep reverting to 1024.

 squidclient mgr:info gives:

   Maximum number of file descriptors:   98304
   Largest file desc currently in use:   18824
   Number of file desc currently in use: 1974

 Amos replied:

That biggest-FD value is too high for workers that only have 16K available
each.
I've just fixed the calculation there (it was adding together the biggest-FD
values for each worker instead of comparing them with max()).

Do you mean you've patched the source code, and if so, how do I get that
patch?  Do I have to move from the stable trunk?


Note that if one of the workers is reaching the limit of available FD, then
you will get that message from that worker while the others run fine with
fewer FD consumed.
Can you display the entire and exact cache.log line in which that error
message is contained, please?

The first log occurrences are:
2013/07/23 08:26:13 kid2| Attempt to open socket for EUI retrieval failed:
(24) Too many open files
2013/07/23 08:26:13 kid2| comm_open: socket failure: (24) Too many open
files
2013/07/23 08:26:13 kid2| Reserved FD adjusted from 100 to 15394 due to
failures
2013/07/23 08:26:13 kid2| '/share/squid/errors/en-za/ERR_CONNECT_FAIL': (24)
Too many open files
2013/07/23 08:26:13 kid2| WARNING: Error Pages Missing Language: en-za
2013/07/23 08:26:13 kid2| WARNING! Your cache is running out of
filedescriptors

Then later:
2013/07/23 10:00:11 kid2| WARNING! Your cache is running out of
filedescriptors
2013/07/23 10:00:27 kid2| WARNING! Your cache is running out of
filedescriptors

After that, the errors become prolific.

Thanks for the help.

Peter






Re: [squid-users] Evaluating SQUID performance

2013-07-25 Thread Amos Jeffries

On 25/07/2013 6:30 p.m., John Joseph wrote:

Hi All

My sincere thanks to Amos, Eliezer, Firas and Henrik.

Based on your advice, I have decided to try out:

  squidclient
  calamaris
  msar
  web-polygraph
Right now I am trying squidclient, trying out its options and trying to 
understand them.
FYI: there are not many squidclient options relevant to what you are 
needing. It is just a fetch client like wget - but able to pull out 
Squid's cachemgr self-reports on performance.
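
For example (assuming Squid is listening on localhost port 3128, the
default), the cache manager reports can be pulled like this:

    squidclient -h 127.0.0.1 -p 3128 mgr:info
    squidclient -h 127.0.0.1 -p 3128 mgr:5min
    squidclient -h 127.0.0.1 -p 3128 mgr:utilization

mgr:info gives the overall counters (file descriptors, hit ratios, memory),
while the 5min and utilization reports are closer to a performance snapshot.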



I have another idea which sprang up: I plan to document my tests and write a 
how-to, aiming not at Squid professionals, but at people like me who have 
just tried out Squid.


That would be a great addition to our benchmarking pages, thank you. All 
the proper external performance benchmarking references we have so far 
are so old they are almost embarrassing to point people at.


Amos


[squid-users] caching failed tcp connects to destination ips

2013-07-25 Thread Dieter Bloms
Hi,

we use the IPv4 and IPv6 TCP protocols for our outgoing interface.
Most sites are accessible via IPv6 if an AAAA record is available,
so IPv6 works great in most cases.

Some sites like http://www.hsp-steuer.de/ announce IPv6 (AAAA) records, but are
not accessible via IPv6.

Is it possible for Squid to notice this failure, so that future requests
go to IPv4 directly and the user doesn't have to wait for the long
TCP timeout every time?
Maybe with a timestamp, so that it is refreshed after x hours.


-- 
Best regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] caching failed tcp connects to destination ips

2013-07-25 Thread Eliezer Croitoru
On 07/25/2013 09:52 AM, Dieter Bloms wrote:
 Hi,
 
 we use the IPv4 and IPv6 TCP protocols for our outgoing interface.
 Most sites are accessible via IPv6 if an AAAA record is available,
 so IPv6 works great in most cases.
 
 Some sites like http://www.hsp-steuer.de/ announce IPv6 (AAAA) records, but are
 not accessible via IPv6.
 
 Is it possible for Squid to notice this failure, so that future requests
 go to IPv4 directly and the user doesn't have to wait for the long
 TCP timeout every time?
 Maybe with a timestamp, so that it is refreshed after x hours.
 
 
It depends on what the client wants/needs.
Most likely IPv4 is the same as IPv6, apart from a couple of things that
differ at the network level.
The DNS should point to the same resources and allow browsers and
proxies to decide whether they will use IPv4 or IPv6.
Then Squid can make the right choice, the way Chrome does:
Chrome tests which SYN completes first and then uses the fastest network
address.

A couple of hours of ipcache is not a good choice, since the Internet is a
dynamic system.

Have you looked at the ipcache and DNS cache yet?

Eliezer




Re: [squid-users] caching failed tcp connects to destination ips

2013-07-25 Thread Amos Jeffries

On 25/07/2013 6:52 p.m., Dieter Bloms wrote:

Hi,

we use the IPv4 and IPv6 TCP protocols for our outgoing interface.
Most sites are accessible via IPv6 if an AAAA record is available,
so IPv6 works great in most cases.

Some sites like http://www.hsp-steuer.de/ announce IPv6 (AAAA) records, but are
not accessible via IPv6.


Send them a bug report?


Is it possible for Squid to notice this failure, so that future requests
go to IPv4 directly and the user doesn't have to wait for the long
TCP timeout every time?


Yes, it is possible, and Squid already does this.
If you check your cachemgr ipcache report you can see it: the DNS 
results domain/IP mapping list has OK/BAD flags on each IP address known. 
BAD addresses will not be used, OK addresses will be tried; success is always a gamble.
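
That report can be pulled with squidclient like any other cachemgr page
(assuming the proxy is reachable on localhost at the default port):

    squidclient mgr:ipcache

Each hostname line is followed by its known addresses with the OK/BAD marker
described above.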




Maybe with a timestamp, so that it is refreshed after x hours.


The DNS lookup result TTL is used, whereupon the DNS server is expected 
to give better working results. Or, if all possible IPs (of both types) are 
tried and all fail, the markers are reset and the destination may be re-tried 
by some other request.


Amos


Re: [squid-users] caching failed tcp connects to destination ips

2013-07-25 Thread Dieter Bloms
Hi Amos,

thank you for your quick answer.

On Thu, Jul 25, Amos Jeffries wrote:

 On 25/07/2013 6:52 p.m., Dieter Bloms wrote:
 Hi,
 
 we use the IPv4 and IPv6 TCP protocols for our outgoing interface.
 Most sites are accessible via IPv6 if an AAAA record is available,
 so IPv6 works great in most cases.
 
 Some sites like http://www.hsp-steuer.de/ announce IPv6 (AAAA) records, but are
 not accessible via IPv6.
 
 Send them a bug report?

I did, but the provider is resistant to fixing this.

 Is it possible for Squid to notice this failure, so that future requests
 go to IPv4 directly and the user doesn't have to wait for the long
 TCP timeout every time?
 
 Yes, it is possible, and Squid already does this.
 If you check your cachemgr ipcache report you can see it: the DNS
 results domain/IP mapping list has OK/BAD flags on each IP address
 known. BAD addresses will not be used, OK addresses will be tried; success is
 always a gamble.

the IPv6 address 2001:8d8:88c:37e2:3e1b:35f0:e10:1 is not reachable on
port 80, but cachemgr says:

--snip--
www.hsp-steuer.de   33   1110  2( 0)
    2001:8d8:88c:37e2:3e1b:35f0:e10:1-OK
    82.165.11.88-OK
--snip--

So is this a bug in Squid, that the IPv6 address is listed as OK?

-- 
Best regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] caching failed tcp connects to destination ips

2013-07-25 Thread Eliezer Croitoru
On 07/25/2013 10:37 AM, Dieter Bloms wrote:
 I did, but the provider is resistant to fixing this.
Ask about it on the bind-users list.
Others will confirm your suspicion..

If it's real, it can most likely be reproduced and you will have no
problem with the site.

Eliezer


Re: [squid-users] Too many open files

2013-07-25 Thread Eliezer Croitoru
On 07/25/2013 09:43 AM, Peter Retief wrote:
 Do you mean you've patched the source code, and if so, how do I get that
 patch?  Do I have to move from the stable trunk?
What version are you using?
Run `squid -v` to get the version etc.
I assume that, other than the RPM I am releasing, there aren't many
updates for LTS/long-life distributions.

You might need to compile it yourself, but I think there is a small
repo for Debian and Ubuntu out there.

Eliezer


RE: [squid-users] Too many open files

2013-07-25 Thread Peter Retief
 Peter:
 The first log occurrences are:
 2013/07/23 08:26:13 kid2| Attempt to open socket for EUI retrieval failed:
 (24) Too many open files
 2013/07/23 08:26:13 kid2| comm_open: socket failure: (24) Too many open files
 2013/07/23 08:26:13 kid2| Reserved FD adjusted from 100 to 15394 due to failures

 Amos:
 So this worker #2 got errors after reaching about 990 open FD (16K -
15394). Ouch.

 Note that all these socket opening operations are failing with the Too
many open files error the OS sends back, limiting Squid to 990 or so FD.
This has confirmed that Squid is not mis-calculating where its limit is;
something in the OS is actually causing it to limit the worker. The first
one to hit it was a socket, but a disk file access is also getting the error
soon after, so it is likely the global OS limit rather than a particular
FD-type limit. That 990 usable FD is also suspiciously close to 1024 with a
few % held spare for emergency use (as Squid does when calculating its
reservation value).

Amos, any ideas where I should look to see where Ubuntu is restricting the
file descriptors?  I thought ulimit -Sn and ulimit -Hn would tell me how
many descriptors any child process should get?
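
One way to see the limit each running worker actually inherited (a quick
sketch; adjust the pgrep pattern if your binary is named differently):

    for pid in $(pgrep -x squid); do
        echo "== PID $pid =="
        grep 'Max open files' "/proc/$pid/limits"
    done

If those values show 1024 while your shell's ulimit shows 65K, the limit is
being applied to the process that launches Squid at boot rather than to your
login session.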




[squid-users] About refresh_pattern

2013-07-25 Thread Ricardo Rios
Hi list, I am trying to cache some application .exe files and updates using
refresh_pattern. When I check my regex in some online tester, the regex
works great, but when I use it I don't see anything other than TCP_MISS/206
in my logs.

Regex:

refresh_pattern download.macromedia.com.*(.exe|.bin) 10800 80% 10800
ignore-no-store ignore-reload reload-into-ims

refresh_pattern armdl.adobe.com/.*\.(exe|msp) 10800  80%  10800
ignore-no-store ignore-reload reload-into-ims


Logs (lots of this):

1374796130.612   1581 10.0.0.58 TCP_MISS/206 23779 GET 
http://download.macromedia.com/get/flashplayer/current/licensing/win/install_flash_player_11_active_x.exe 
- HIER_DIRECT/23.12.163.191 application/octet-stream
1374796131.654    997 10.0.0.58 TCP_MISS/206 13916 GET 
http://download.macromedia.com/get/flashplayer/current/licensing/win/install_flash_player_11_active_x.exe 
- HIER_DIRECT/23.12.163.191 application/octet-stream
1374796132.166    463 10.0.0.58 TCP_MISS/206 7533 GET 
http://download.macromedia.com/get/flashplayer/current/licensing/win/install_flash_player_11_active_x.exe 
- HIER_DIRECT/23.12.163.191 application/octet-stream


and

1374796907.507   2262 10.0.0.58 TCP_MISS/206 14410 GET 
http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/es_ES/AdbeRdr950_es_ES.exe 
- HIER_DIRECT/208.185.44.66 application/octet-stream
1374796909.198   1670 10.0.0.58 TCP_MISS/206 7160 GET 
http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/es_ES/AdbeRdr950_es_ES.exe 
- HIER_DIRECT/208.185.44.66 application/octet-stream
1374796913.060   2786 10.0.0.58 TCP_MISS/206 14201 GET 
http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/es_ES/AdbeRdr950_es_ES.exe 
- HIER_DIRECT/208.185.44.66 application/octet-stream
1374796915.608   1463 10.0.0.58 TCP_MISS/206 12824 GET 
http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/es_ES/AdbeRdr950_es_ES.exe 
- HIER_DIRECT/208.185.44.66 application/octet-stream


I am using a 50 GB rock cache, with 4 workers.
maximum_object_size 500 MB


What am I doing wrong? Thanks in advance for any answer.



Re: [squid-users] About refresh_pattern

2013-07-25 Thread Amos Jeffries

On 26/07/2013 12:04 p.m., Ricardo Rios wrote:
Hi list, I am trying to cache some application .exe files and updates using
refresh_pattern. When I check my regex in some online tester, the regex
works great, but when I use it I don't see anything other than TCP_MISS/206
in my logs.



206 Partial Content means only a portion of the object was received 
back from the server. Squid cannot cache these incomplete objects, so 
refresh_pattern is not relevant.


You want range_offset_limit -1 to make Squid fetch the full object when the 
client requests any sub-portion like this. But be careful: this option 
applies to *all* requests and can cause Squid to fetch large amounts of 
data from the network which are never sent to any client (erasing the 
bandwidth-saving benefits of the cache).
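
For example, a minimal squid.conf sketch (assuming Squid 3.2 or later, where
range_offset_limit can take an optional ACL to narrow its scope; the acl name
here is made up for illustration):

    acl fullfetch dstdomain download.macromedia.com armdl.adobe.com
    range_offset_limit -1 fullfetch
    quick_abort_min -1 KB

Restricting the directive to the two download hosts keeps Squid from fetching
whole objects for every ranged request elsewhere, and quick_abort_min -1 tells
it to finish the download even if the client disconnects.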


Amos


Re: [squid-users] About refresh_pattern

2013-07-25 Thread Ricardo Rios

On 26/07/2013 12:04 p.m., Ricardo Rios wrote:


Hi list, I am trying to cache some application .exe files and updates
using refresh_pattern. When I check my regex in some online tester,
the regex works great, but when I use it I don't see anything other
than TCP_MISS/206 in my logs.


206 Partial Content means only a portion of the object was received
back from the server. Squid cannot cache these incomplete objects, so
refresh_pattern is not relevant.

You want range_offset_limit -1 to make Squid fetch the full object when
the client requests any sub-portion like this. But be careful: this option
applies to *all* requests and can cause Squid to fetch large amounts of
data from the network which are never sent to any client (erasing the
bandwidth-saving benefits of the cache).

Amos


Oh, I see: all the requests have different sizes, I hadn't noticed that. 
Thanks Amos.


Re: [squid-users] Too many open files

2013-07-25 Thread Eliezer Croitoru
On 07/25/2013 02:10 PM, Peter Retief wrote:
 Amos, any ideas where I should look to see where Ubuntu is restricting the
 file descriptors?  I thought ulimit -Sn and ulimit -Hn would tell me how
 many descriptors any child process should get?
Many things should happen and still they do not (this is what I know).
I think we can try to get some help on that from the Ubuntu team..

Don't just restart a server without making sure the traffic is fine..
Since you are using WCCP, I would suggest you share the setup and then
we can try to help you more later on if needed.

If the setup is right and in place, there should be no problem finding
the right place to look, for example:
https://bugs.launchpad.net/ubuntu/+bug/672749
at Ubuntu, as a starter.

And then notice that there are other parts of Linux that apply ulimits:
http://serverfault.com/questions/235356/open-file-descriptor-limits-conf-setting-isnt-read-by-ulimit-even-when-pam-limi
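
As a quick check in that direction (an assumption worth verifying: on Ubuntu,
limits.conf is only applied to sessions that pass through PAM with pam_limits
enabled, and daemons started at boot generally do not):

    grep -r pam_limits /etc/pam.d/
    # expect a line such as:
    # session required pam_limits.so

If the squid processes are launched at boot rather than from a PAM login
session, the limits.conf values may never reach them, which is why forcing
ulimit in the init script (as suggested earlier in the thread) works.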

I do not like to redirect, but it seems to me like the best choice now.

Also, there is a basic assumption that you want to find the source of the
problem and not just make it work??

I would assume that you set up your WCCP correctly.
Do you use it in tunnel or route mode? In route mode you can easily get into
a complex situation where you have an endless routing loop (until X TTL).

But I assume the problem was solved already??

Eliezer