Re: [squid-users] Cancel this mailing list

2012-10-23 Thread Amos Jeffries

On 24.10.2012 13:43, Kavin Xiao wrote:

Hi,

How do I cancel (unsubscribe from) this mailing list?

Thanks



Follow the instructions right next to the ones you used to sign up:
  http://www.squid-cache.org/Support/mailing-lists.html

Amos


[squid-users] Cancel this mailing list

2012-10-23 Thread Kavin Xiao
Hi,

How do I cancel (unsubscribe from) this mailing list?

Thanks


- Original Message - 
From: "Amos Jeffries" 
To: 
Sent: Wednesday, October 24, 2012 8:35 AM
Subject: Re: [squid-users] Squid 3.1 Client Source Port Identity Awareness


> On 24.10.2012 07:55, Alexander.Eck wrote:
>> Hi everyone,
>>
>> is it possible to have squid use the same Source Port to connect to 
>> the Webserver as the client uses to connect to squid ?
>>
> 
> No. One gets errors when bind() is used on an already open port.
> connect() and sendto() do not supply the OS with IP:port details.
> 
> 
>>
>> My problem is the following setup:
>>
>> Various Citrix Server
>> URL Filtering with Identity Awareness
>> Squid 3.1 as Cache Proxy
>>
>> I had to install a Terminal Server Identity Agent on every Citrix 
>> Server to distinguish the users.
>>
>> The Identity Agent assigns port ranges to every user, to distinguish 
>> them.
>>
>>
>> Problem is:
>> In my firewall logs i can see the identity of the user for the 
>> request from the citrix server to the proxy (proxy is in the dmz). 
>> But i can't see the identity from the request from the proxy to the 
>> Internet.
>>
>> My guess is, that this is because squid isn't using the same Source 
>> Port as the client, or is not forwarding the Source Port.
> 
> "client" also does not mean what you think it means. Squid is a client 
> in HTTP and can generate new or different requests along with those 
> aggregated from its inbound clients.
> 
> HTTP/1.1 is also stateless with multiplexing and pipelines. Any 
> outgoing connection can be shared by requests received between multiple 
> inbound client connections. There is no relationship between inbound and 
> outbound - adding a stateful relationship (pinning) degrades performance 
> a LOT.
> 
> How does your fancy client identification system correlate them, 
> then?
> 
> PS: the TCP/IP firewall level is not a good place to log HTTP level 
> client details.
> 
>>
>> Did anybody try something similar and got it working ?  Is squid 
>> capable of doing this or do i have an error in reasoning about my setup ?
>>
>> Any help is appreciated :)
> 
> 
> Amos

Re: [squid-users] Squid 3.1 Client Source Port Identity Awareness

2012-10-23 Thread Amos Jeffries

On 24.10.2012 07:55, Alexander.Eck wrote:

Hi everyone,

is it possible to have squid use the same Source Port to connect to 
the Webserver as the client uses to connect to squid ?



No. One gets errors when bind() is used on an already open port.
connect() and sendto() do not supply the OS with IP:port details.




My problem is the following setup:

Various Citrix Server
URL Filtering with Identity Awareness
Squid 3.1 as Cache Proxy

I had to install a Terminal Server Identity Agent on every Citrix 
Server to distinguish the users.

The Identity Agent assigns port ranges to every user, to distinguish 
them.



Problem is:
In my firewall logs i can see the identity of the user for the 
request from the citrix server to the proxy (proxy is in the dmz). 
But i can't see the identity from the request from the proxy to the 
Internet.

My guess is, that this is because squid isn't using the same Source 
Port as the client, or is not forwarding the Source Port.


"client" also does not mean what you think it means. Squid is a client 
in HTTP and can generate new or different requests along with those 
aggregated from its inbound clients.


HTTP/1.1 is also stateless with multiplexing and pipelines. Any 
outgoing connection can be shared by requests received between multiple 
inbound client connections. There is no relationship between inbound and 
outbound - adding a stateful relationship (pinning) degrades performance 
a LOT.


How does your fancy client identification system correlate them, 
then?


PS: the TCP/IP firewall level is not a good place to log HTTP level 
client details.




Did anybody try something similar and got it working ?  Is squid 
capable of doing this or do i have an error in reasoning about my setup ?

Any help is appreciated :)



Amos


Re: [squid-users] feature request: setting location of coordinator.ipc and kidx.ipc during runtime?

2012-10-23 Thread Amos Jeffries
On 24.10.2012 03:38, Rietzler, Markus (RZF, SG 324 / ) wrote:

hi,

we want to use squid with smp workers.
workers are running fine. now also logrotate works (although not as
expected, see my other mail "[squid-users] question of understanding:
squid smp/workers and logfiles"; it works only with an access_log for each
worker, not one single one).

now there is only one problem.

when we compile squid we use

./configure --prefix /default/path/to/squid

in our production environment squid lies under a different path (eg.
/path/to/squid). we also use several instances of squid, e.g. one
internet, one intranet, one extranet etc., each one with its own
directory structure like etc, run, log, cache etc.

via squid.conf we can set every required path (log, log_file_daemon,
icons, error, unlinkd etc) but not for the ipc-location.

in src/ipc/Port.cc the location is hardcoded:

const char Ipc::coordinatorAddr[] = DEFAULT_STATEDIR 
"/coordinator.ipc";

const char Ipc::strandAddrPfx[] = DEFAULT_STATEDIR "/kid";

I can patch src/ipc/Makefile to have localstatedir point to another
dir than /default/path/to/squid/var (that's how localstatedir will be
expanded in the Makefile). but this is not really what we want. we
want to be able to have the location set via squid.conf or an
environment var during runtime.

we tried to use something like

const char Ipc::coordinatorAddr[] = Config.coredump_dir 
"/coordinator.ipc";


but then we get compile errors.

is it possible to create some patch to allow setting the location of
the ipc-files at runtime?


Yes and no.

These are network sockets needing to be accessed by all instances of 
the multiple processes which form Squid. There is no reason to touch or 
change them.
 If we allowed reconfiguration of where one is placed, anyone could 
accidentally place that inside if...else conditions and would be unable 
to operate their Squid reliably when the internal communication channels 
to the coordinator become disconnected.
 If we allowed you to register multiple "/some/shared/kid1.ipc" and then 
start several differently configured Squid you could face the second 
instance crashing with unable-to-open-socket errors, or you could zombie 
the existing process, or you could cause crossover between the two 
coordinators or the two workers.
We really do not want to have to assist with debugging that type of 
problem needlessly.



The SMP support in Squid is designed to remove any reason why you 
should need to operate multiple different Squid installations on one 
box. It is almost but not quite complete; if you find a particular 
feature (like that logs bug) you need to segment but are unable to do so, 
please point it out. The UDS channel sockets notwithstanding, as they are the 
mechanism by which segmentation is coordinated and enforced.



To operate Squid with multiple segregated run-time environments for 
different clients I suggest you look at re-designing your squid.conf 
along these lines:


squid.conf:
 workers 3
 include /etc/squid/squid.conf.${process_id}


With squid.conf.1, squid.conf.2, squid.conf.3 containing a complete 
copy of what would have been squid.conf for the environment you want to 
present to your client base that process is serving.
 When you need to guarantee a per-worker resource like log files, use 
${process_id} as part of the path or filename like the above example. You 
can also use ${process_name} the same way.
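
As a concrete illustration of that layout (a sketch only; the paths and 
values are illustrative and the macro names follow the ones used above):

  # /etc/squid/squid.conf
  workers 3
  include /etc/squid/squid.conf.${process_id}

  # /etc/squid/squid.conf.1  (and .2, .3 for the other environments)
  http_port 3128
  access_log daemon:/var/log/squid/access_kid${process_number}.log squid
  cache_dir ufs /cache/squid/worker${process_number} 1000 16 256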


FUN: If you need two workers to both present one shared environment you 
can use symlinks to point squid.conf.4 at squid.conf.5 for example and 
the coordinator will ensure they share resources as well as config 
files.

 * Note: this clashes with using the ${process_id} macro in paths.

MORE FUN: to share resources between environments, just configure the 
same lines for the cache location etc in multiple per-worker squid.conf. 
Again the coordinator will link the processes together with the shared 
resource.


PS: we currently only provide one shared memory cache, so segmenting 
that is not possible; the old-style local caches can be used instead. TMF 
have a project underway cleaning up the cache systems to make things 
more flexible; get in touch if you need any changes there.


Amos


Re: [squid-users] question of understanding: squid smp/workers and logfiles

2012-10-23 Thread Amos Jeffries
On 24.10.2012 01:32, Rietzler, Markus (RZF, SG 324 / ) wrote:

we are using squid 3.2.3 with smp workers.

in http://wiki.squid-cache.org/Features/SmpScale it is written that
workers can share logs.
in the docu it is also mentioned, that one should upgrade to the
faster and better logfile daemon.

we are using the following (log) config:

workers 3
logfile_daemon /path/to/log_file_daemon
logfile_rotate 14
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %
access_log daemon:$SQUID_FILES/logs/access_kid${process_number}.log squid


and do a rotate, I get access_kidX.log.0, so each worker is
rotating its own logfile.

so this will work, but can we have one single logfile for all workers
and have rotation working correctly?


They do share a log file. Unfortunately there is a bug and they each 
perform their own rotate signal - resulting in splitting out into 
separate numbered logs every rotate.
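
Until that is fixed, one workaround (a sketch only; paths are illustrative) 
is to give each worker its own log name, so the per-worker rotation at 
least stays tidy, and merge afterwards if a single file is needed:

  # squid.conf
  access_log daemon:/var/log/squid/access_kid${process_number}.log squid

  # after "squid -k rotate", e.g. from cron:
  cat /var/log/squid/access_kid*.log.0 > /var/log/squid/access.log.0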


Amos


Re: [squid-users] Re: Squid 3.2.2 + localhost

2012-10-23 Thread Amos Jeffries

On 23.10.2012 23:08, alberto.desi wrote:
Yes, I've read the mail but I think that it is better to post it 
here... ;-)

The main problem is that if I rewrite the $url (localhost [::1], or
[5001::52] [3001::52] that are the addresses of the interfaces) it doesn't
work. If I rewrite $url with "302:" code in front it works...


Because that is NOT re-write. That is redirect - ie how HTTP is 
designed to operate, and it works far better than re-writing hacks do.


You can redirect using 301, 302, 303, or 307 status codes depending on 
the wanted behaviour from the client when its handling the redirect. 
They offer a matrix for temporary versus permanent change of URL (eg 
update bookmarks and cached reference content), and to alter the method 
to GET versus retain the existing one when passing the request to the 
new URL.
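
For example, a url_rewrite helper issuing such a redirect could be as small 
as the sketch below (the addresses are the ones quoted later in this thread 
and the "3xx:" prefix is the 3.1/3.2-era helper convention; this is an 
illustration, not alberto's actual script):

  #!/usr/bin/perl
  use strict;
  use warnings;
  $| = 1;                                # one unbuffered reply per request line
  while (my $line = <STDIN>) {
      my ($url) = split ' ', $line;      # first field of each request line is the URL
      if ($url =~ m{^http://\[6001::101\]/}) {
          (my $new = $url) =~ s/\[6001::101\]/[5001::52]/;
          print "302:$new\n";            # ask the client to re-fetch from the new URL
      } else {
          print "$url\n";                # echo the URL back unchanged
      }
  }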


You should only ever need to re-write the URL if you are altering part 
of the path segment or query string parameters. The only time when you 
may need to re-write the URL to a different destination is when


To simply *route* the requests via a different upstream server use a 
cache_peer directive to setup a fixed TCP connection to the server.



but the
behavior of the system is completely different.
Replying to your requests, I'm working on 4 virtual machines called 
origin, core, mobile1, mobile2. In the origin and mobile I have apache 
servers running... those are my caches!!! When I write localhost, I want 
to redirect to the apache link


Aha. Thank you. That was one of the confusing things - your names for 
the machines are not aligning with the common networking terminology for 
what they do.


Because Squid is a *type* of software called a 'cache/caching proxy' 
and Apache is a *type* called 'origin server'. 'localhost' is ::1 or 
127.0.0.1.





example:
I receive GET for http://[6001::101]/Big/big1.avi and I want to 
rewrite it like http://[5001::52]/Big/big1.avi, that is the link to 
the apache in the same machine where Squid is installed. This is not 
working. But if I redirect to another machine 
http://[4001::52]/Big/big1.avi with apache it works.

OK?


Okay. Start with forgetting re-write and redirect. What you are doing 
is HTTP routing.


Which means you configure cache_peer for each of your Apache servers. 
Do something to identify where the request is supposed to go. And have 
Squid relay the request there. No need for changing it in any way.



To identify where to send it you have your script.

Use the external_acl_type helper interface to call your Script. This 
does three important things:

 1) offers you far more parameters than the old url_rewrite interface.
 2) can be called at any time in the ACL processing chain
 3) provides tagging and a few other details feedback from the helper 
to Squid.



I will leave you the study of figuring out what external_acl_type % 
format codes are needed by your helper. Here is the documentation: 
http://www.squid-cache.org/Doc/config/external_acl_type/


I suggest instead of sending back an altered URL send back "ERR" for no 
change, and "OK tag=X" for a change, with X being a tag assigned to 
identify one of the Apache (could be the Apache IP for example).
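
As an illustration only (the URI test and the tag value below are 
assumptions modelled on the addresses in this thread), such a helper could 
be as small as:

  #!/usr/bin/perl
  # minimal non-concurrent external_acl_type helper sketch;
  # expects the request URI as the only format code passed by Squid
  use strict;
  use warnings;
  $| = 1;
  while (my $line = <STDIN>) {
      chomp $line;
      my ($uri) = split ' ', $line;
      if ($uri =~ m{/Big/}) {        # decide which backend should serve this request
          print "OK tag=5001\n";     # route via the Apache peer tagged 5001 below
      } else {
          print "ERR\n";             # no backend known: findServer fails, request denied
      }
  }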


Then add something like the following to squid.conf:

  external_acl_type whichServer ...
  acl findServer external whichServer

  # allow IF and only if we have a backend to send it to
  http_access allow findServer
  http_access deny all

  # check if your helper sent "OK tag=4001" and pass it to server [4001::101]
  acl tag4001 tag 4001
  cache_peer [4001::101] parent 80 0 ... name=Apache4001
  cache_peer_access Apache4001 allow tag4001
  cache_peer_access Apache4001 deny all

  # check if your helper sent "OK tag=5001" and pass it to server [5001::101]
  acl tag5001 tag 5001
  cache_peer [5001::101] parent 80 0 ... name=Apache5001
  cache_peer_access Apache5001 allow tag5001
  cache_peer_access Apache5001 deny all


Then you just have to check that your backend Apache servers are set up to 
handle the client requests, which they will receive exactly as if the client 
was contacting them directly - with all client TCP and HTTP level details 
unchanged (ie fully transparent proxy).


Amos



*[rewriter_code]*
#!/usr/bin/perl

use warnings;
use strict;
use Fcntl ':flock';

require '/home/alberto/NodeConfig.pm';            # dir/.../NodeConfig.pm
my $dirDB = ($NodeConfig::dir_DB);                # directory local database
my $db_name = ($NodeConfig::name_DB);             # name of local database
my $node_address = ($NodeConfig::node_address);   # MAR's address
my $DM_address = ($NodeConfig::DM_address);       # DM's address
my $dir_apache = ($NodeConfig::dir_Apache);       # directory of contents (Apache server)
my $dir_DM = ($NodeConfig::dir_DM);               # directory where there is the DM_req.pl in DM
my $rootpwd = ($NodeConfig::root_pwd);            # password for root access (to send the request to DM)

Re: [squid-users] TPROXY Timeouts on Select Websites

2012-10-23 Thread Matthew Goff
On Mon, Oct 22, 2012 at 10:40 PM, Amos Jeffries  wrote:
> If I am reading that correctly you are saying the ICMPv6 'too big' packets
> are not going to Squid, but to the client machine?
> Which would make it a TPROXY bug, since the outbound connection from Squid
> is where the MTU should be lowered at the kernel level.
>  Or are they *addressed* to the client machine and caught by TPROXY properly
> but MTU not respected?

Here is a tcpdump taken from my edge router. 2001:snip:9a0a is my
client machine. 2001:snip::1 is the LAN interface of the edge router
that this dump is from. I enabled v6 on my client and tried to access
"google.com" to get these results.

16:04:17.362562 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [S], seq 913579164, win 14400,
options [mss 1440,sackOK,TS val 61626579 ecr 0,nop,wscale 7], length 0
16:04:18.358639 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [S], seq 913579164, win 14400,
options [mss 1440,sackOK,TS val 61626829 ecr 0,nop,wscale 7], length 0
16:04:18.397759 IP6 den03s05-in-x11.1e100.net.www >
2001:snip:9a0a.53616: Flags [S.], seq 685180099, ack 913579165, win
14280, options [mss 1410,sackOK,TS val 1528504575 ecr
61626829,nop,wscale 6], length 0
16:04:18.397848 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [.], ack 1, win 113, options
[nop,nop,TS val 61626838 ecr 1528504575], length 0
16:04:18.398024 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [.], seq 1:1399, ack 1, win 113,
options [nop,nop,TS val 61626838 ecr 1528504575], length 1398
16:04:18.398159 IP6 2001:snip:1 > 2001:snip:9a0a: ICMP6, packet too
big, mtu 1280, length 1240
16:04:18.398181 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [P.], seq 1399:1742, ack 1, win
113, options [nop,nop,TS val 61626838 ecr 1528504575], length 343
16:04:18.443360 IP6 den03s05-in-x11.1e100.net.www >
2001:snip:9a0a.53616: Flags [.], ack 1, win 224, options [nop,nop,TS
val 1528504621 ecr 61626838,nop,nop,sack 1 {1399:1742}], length 0
16:04:18.630661 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [.], seq 1:1399, ack 1, win 113,
options [nop,nop,TS val 61626897 ecr 1528504621], length 1398
16:04:18.630839 IP6 2001:snip:1 > 2001:snip:9a0a: ICMP6, packet too
big, mtu 1280, length 1240
16:04:19.102673 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [.], seq 1:1399, ack 1, win 113,
options [nop,nop,TS val 61627015 ecr 1528504621], length 1398
16:04:19.102849 IP6 2001:snip:1 > 2001:snip:9a0a: ICMP6, packet too
big, mtu 1280, length 1240
16:04:20.046674 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [.], seq 1:1399, ack 1, win 113,
options [nop,nop,TS val 61627251 ecr 1528504621], length 1398
16:04:20.046851 IP6 2001:snip:1 > 2001:snip:9a0a: ICMP6, packet too
big, mtu 1280, length 1240
16:04:21.938682 IP6 2001:snip:9a0a.53616 >
den03s05-in-x11.1e100.net.www: Flags [.], seq 1:1399, ack 1, win 113,
options [nop,nop,TS val 61627724 ecr 1528504621], length 1398
16:04:21.938867 IP6 2001:snip:1 > 2001:snip:9a0a: ICMP6, packet too
big, mtu 1280, length 1240


Re: [squid-users] Squid 3.1 Client Source Port Identity Awareness

2012-10-23 Thread Eliezer Croitoru

On 10/23/2012 8:55 PM, alexander@heidelberg.de wrote:

Any help is appreciated:)





Best Regards





Alex

Take a peek at TPROXY.
If you can share your squid.conf you can get better help.
(notice that your email looks bad with lots of spaces)

Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] Squid 3.1 Client Source Port Identity Awareness

2012-10-23 Thread Alexander.Eck
Hi everyone,

is it possible to have squid use the same Source Port to connect to the
Webserver as the client uses to connect to squid ?

My problem is the following setup:

Various Citrix Server
URL Filtering with Identity Awareness
Squid 3.1 as Cache Proxy

I had to install a Terminal Server Identity Agent on every Citrix Server
to distinguish the users.

The Identity Agent assigns port ranges to every user, to distinguish them.

Problem is:
In my firewall logs i can see the identity of the user for the request from
the citrix server to the proxy (proxy is in the dmz). But i can't see the
identity from the request from the proxy to the Internet.

My guess is, that this is because squid isn't using the same Source Port as
the client, or is not forwarding the Source Port.

Did anybody try something similar and got it working ?  Is squid capable of
doing this or do i have an error in reasoning about my setup ?

Any help is appreciated :)

Best Regards

Alex




Re: [squid-users] squid 3.2.3 crashed with FATAL error

2012-10-23 Thread Ben

Hi,


Hi,

On 23/10/2012 8:10 p.m., Ben wrote:

Hi,


On 23/10/2012 5:07 a.m., Ben wrote:

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the 
error "FATAL: Bungled (null) line 192: icap_retry deny all". What 
could be the reason behind this problem? How do I resolve it?


Did you ./configure  using --enable-icap-client ?

Yes, I configured with these options.

Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-storeio=aufs,ufs' '--enable-removal-policies=lru,heap' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' 
'--enable-wccp' '--enable-wccp2' '--enable-kill-parent-hack' 
'--enable-http-violations' '--enable-async-io=128' 
'--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
'--enable-libcap' --enable-ltdl-convenience



Amos



Since yesterday there are no more entries for this fatal error. What 
does this error mean?


I'm not exactly sure what the bungled is about. I've just patched 
latest 3.HEAD to explain "(null)" better. That means one of the 
default values built-in to Squid is broken.


This message is saying that the built-in default value, used when there is 
nothing in your squid.conf about icap_retry, could not be defined.



What do you mean by "since last day" ...  you have a new build that 
works? or you added icap_retry to the config and it works? or no 
changes and it just started working?




Yes, no changes and it just started working.

I just got some logs now,

cat /opt/squid-3.2.3/var/logs/cache.log | grep -i fatal
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all
FATAL: Bungled (null) line 192: icap_retry deny all

what do you suggest to resolve it?



One thing I noticed in 3.2.3: there are none of the FATAL dying issues 
which I faced in the 3.1 series, about which I had sent mail to squid-users.



Amos

BR
Ben

BR
Ben


Re: [squid-users] TPROXY Timeouts on Select Websites

2012-10-23 Thread Eliezer Croitoru

On 10/23/2012 1:53 PM, Matthew Goff wrote:

I don't know if Squid had already processed the packets for re-writing
before Wireshark displays them or not, so I'll check a tcpdump at the
router itself to see where it originally directed the packet to before
my Squid box had any chance to mangle it.
Squid dosnt process ICMP packets on TPROXY.. so it's not really related 
to squid(so it seems to me).


Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] feature request: setting location of coordinator.ipc and kidx.ipc during runtime?

2012-10-23 Thread Rietzler, Markus (RZF, SG 324 / )
hi,

we want to use squid with smp workers. 
workers are running fine. now also logrotate works (although not as expected, 
see my other mail "[squid-users] question of understanding: squid smp/workers 
and logfiles"; it works only with an access_log for each worker, not one single one).

now there is only one problem.

when we compile squid we use

./configure --prefix /default/path/to/squid

in our production environment squid lies under a different path (eg. 
/path/to/squid). we also use several instances of squid, e.g. one internet, one 
intranet, one extranet etc., each one with its own directory structure like etc, 
run, log, cache etc. 

via squid.conf we can set every required path (log, log_file_daemon, icons, 
error, unlinkd etc) but not for the ipc-location. 

in src/ipc/Port.cc the location is hardcoded:

const char Ipc::coordinatorAddr[] = DEFAULT_STATEDIR "/coordinator.ipc";
const char Ipc::strandAddrPfx[] = DEFAULT_STATEDIR "/kid";

I can patch src/ipc/Makefile to have localstatedir point to another dir than 
/default/path/to/squid/var (that's how localstatedir will be expanded in the 
Makefile). but this is not really what we want. we want to be able to have the 
location set via squid.conf or an environment var during runtime. 

we tried to use something like

const char Ipc::coordinatorAddr[] = Config.coredump_dir "/coordinator.ipc";

but then we get compile errors.

is it possible to create some patch to allow setting the location of the ipc-files 
at runtime? 

thanxs



Markus Rietzler

Rechenzentrum der Finanzverwaltung

Tel: 0211/4572-2130


Re: [squid-users] Squid 3.x debian-squeeze with enable-ssl

2012-10-23 Thread Bartosz.C
In the build-dep dependencies the maintainers did not include libssl-dev and
devscripts, so building with ssl support is impossible without them.
Fortunately I am not the only one with this problem, so I have found the answer:
http://www.banym.de/linux/build-squid-with-enable-ssl-on-debian
Regards.
Bartosz.
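
For anyone hitting the same wall, the steps implied by this thread amount 
to roughly the following (a sketch; the directory name is the one quoted 
elsewhere in this thread, the rest is the usual Debian source-build routine):

  apt-get install libssl-dev devscripts
  # add "--enable-ssl \" to the configure options in debian/rules
  cd squid3-3.1.6
  debuild -us -uc -b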


[squid-users] question of understanding: squid smp/workers and logfiles

2012-10-23 Thread Rietzler, Markus (RZF, SG 324 / )
we are using squid 3.2.3 with smp workers.

in http://wiki.squid-cache.org/Features/SmpScale it is written that workers can 
share logs.
in the docu it is also mentioned, that one should upgrade to the faster and 
better logfile daemon.

we are using the following (log) config:

workers 3
logfile_daemon /path/to/log_file_daemon
logfile_rotate 14
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %
Rechenzentrum der Finanzverwaltung

Tel: 0211/4572-2130




Re: [squid-users] TPROXY Timeouts on Select Websites

2012-10-23 Thread Matthew Goff
On Mon, Oct 22, 2012 at 10:40 PM, Amos Jeffries  wrote:
> If I am reading that correctly you are saying the ICMPv6 'too big' packets
> are not going to Squid, but to the client machine?

I will have to try and run a tcpdump on the edge router itself when I
get off work today, but the Wireshark from my Squid box showed the
ICMPv6 'too big' originating from my edge router's LAN port with a
destination address of my client machine.

> Which would make it a TPROXY bug, since the outbound connection from Squid
> is where the MTU should be lowered at the kernel level.

Is there a Squid config option to modify MTU? I didn't see anything
except the PMTU discovery, but perhaps it would be a beneficial
addition to allow specifying MTU? I'm not even certain how much work
that would be as I haven't looked through the Squid src myself; just a
thought.
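
(As an aside, and purely as a sketch of a network-level workaround rather 
than a Squid option: PMTU blackholes of this kind are often papered over by 
clamping the TCP MSS on the forwarding box, e.g.

  ip6tables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN \
      -j TCPMSS --clamp-mss-to-pmtu

whether that is appropriate here depends on where the ICMPv6 is actually 
being dropped.)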

>  Or are they *addressed* to the client machine and caught by TPROXY properly
> but MTU not respected?

I don't know if Squid had already processed the packets for re-writing
before Wireshark displays them or not, so I'll check a tcpdump at the
router itself to see where it originally directed the packet to before
my Squid box had any chance to mangle it.


Re: [squid-users] squid 3.2.3 crashed with FATAL error

2012-10-23 Thread Ben

Hi,

On 23/10/2012 8:10 p.m., Ben wrote:

Hi,


On 23/10/2012 5:07 a.m., Ben wrote:

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the 
error "FATAL: Bungled (null) line 192: icap_retry deny all". What 
could be the reason behind this problem? How do I resolve it?


Did you ./configure  using --enable-icap-client ?

Yes, I configured with these options.

Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-storeio=aufs,ufs' '--enable-removal-policies=lru,heap' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' 
'--enable-wccp' '--enable-wccp2' '--enable-kill-parent-hack' 
'--enable-http-violations' '--enable-async-io=128' 
'--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
'--enable-libcap' --enable-ltdl-convenience



Amos



Since yesterday there are no more entries for this fatal error. What 
does this error mean?


I'm not exactly sure what the bungled is about. I've just patched 
latest 3.HEAD to explain "(null)" better. That means one of the 
default values built-in to Squid is broken.


This message is saying that the built-in default value, used when there is 
nothing in your squid.conf about icap_retry, could not be defined.



What do you mean by "since last day" ...  you have a new build that 
works? or you added icap_retry to the config and it works? or no 
changes and it just started working?




Yes, no changes and it just started working.

One thing I noticed in 3.2.3: there are none of the FATAL dying issues which 
I faced in the 3.1 series, about which I had sent mail to squid-users.



Amos

BR
Ben


[squid-users] Re: Squid 3.2.2 + localhost

2012-10-23 Thread alberto.desi
Yes, I've read the mail but I think that it is better to post it here... ;-)
The main problem is that if I rewrite the $url (localhost [::1], or
[5001::52] [3001::52] that are the addresses of the interfaces) it doesn't
work. If I rewrite $url with "302:" code in front it works... but the
behavior of the system is completely different.
Replying to your requests, I'm working on 4 virtual machines called origin,
core, mobile1, mobile2. In the origin and mobile I have apache servers
running... those are my caches!!! When I write localhost, I want to redirect
to the apache link

example:
I receive GET for http://[6001::101]/Big/big1.avi and I want to rewrite it
like http://[5001::52]/Big/big1.avi, that is the link to the apache in the
same machine where Squid is installed. This is not working. But if I
redirect to another machine http://[4001::52]/Big/big1.avi with apache it
works.

OK?

*[rewriter_code]*
#!/usr/bin/perl

use warnings;
use strict;
use Fcntl ':flock';

require '/home/alberto/NodeConfig.pm';            # dir/.../NodeConfig.pm
my $dirDB = ($NodeConfig::dir_DB);                # directory local database
my $db_name = ($NodeConfig::name_DB);             # name of local database
my $node_address = ($NodeConfig::node_address);   # MAR's address
my $DM_address = ($NodeConfig::DM_address);       # DM's address
my $dir_apache = ($NodeConfig::dir_Apache);       # directory of contents (Apache server)
my $dir_DM = ($NodeConfig::dir_DM);               # directory where there is the DM_req.pl in DM
my $rootpwd = ($NodeConfig::root_pwd);            # password for root access (to send the request to DM)

$|=1;

#-
# PARAMETERS (modifying only ip address oCDN)


while (<>) {

  my @params_http = split;                        # parameters of http request
  my $url = $params_http[0];                      # url of http request
  my $ip_client = $params_http[1];                # ip client of http request

  my $absTime = time();                           # absolute time in seconds

  my $db_name = ($NodeConfig::name_DB);
  my $node_address = ($NodeConfig::node_address);
  my @copie;

#--- REWRITE URL SQUID FUNCTION

  # Check if the content is inside the cache:
  #   if YES --> Go directly to MAR's cache
  #   if NO  --> Forward the request to DM and wait for the best cache or Origin path

  open(LIST1, "< $dirDB"."$db_name");             # open local database for READ
  flock ( LIST1, LOCK_SH );
  my @copieS = <LIST1>;
  flock ( LIST1, LOCK_UN );
  close(LIST1);
  my $c;
  for my $ind (0..$#copieS) {
    my @values = split(';', $copieS[$ind]);
    my $original = $values[0];
    my $copy = $values[1];
    my $iT1 = $values[2];

    # seeking in the database if the content is in the cache
    if (($url eq $original) and ($iT1 eq "Y")) {

      my @val1 = split('/', $url);
      if (-e "$dir_apache"."$val1[3]"."/"."$val1[4]") {

        my $newURL = "$val1[0]//$node_address/$val1[3]/$val1[4]";

        print "$newURL\n";
        #print "302:"."$newURL\n";
        exit;
      }
    }
  }

  # request to DM for best position of content (Origin or other MARs)
  my $req = request($DM_address,$dir_DM,$url,$node_address);

  print "$req\n";
}


#--- END rewriteurl.pl ---


#--- Subroutine to forward the request to DM ---

# subroutine to send the request for best position to DM (ssh call)
sub request {

   my $DM_address = $_[0];
   my $dir_DM = $_[1];
   my $url = $_[2];
   my $node_address = $_[3];


   my $length = length($DM_address);
   my $DM_addressSSH = substr($DM_address,1,$length-2);
   my $req_DM = "sshpass -p '$rootpwd' ssh 

[squid-users] Squid 3.x debian-squeeze with enable-ssl

2012-10-23 Thread Bartosz.C
 I'm trying to build a successful deb package in Debian Squeeze with ssl support.
 When I add the line
 --enable-ssl \
 in ./squid3-3.1.6/debian/rules I get the error below; without this line
 (...) ./squid3-3.1.6# debuild -us -uc -b
 works fine. What is wrong?

make[3]: Entering directory `/root/kompilacje/1/squid3-3.1.6/src'
Making all in base
make[4]: Entering directory `/root/kompilacje/1/squid3-3.1.6/src/base'
/bin/bash ../../libtool --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H
 -I../.. -I../../include -I../../src -I../../include -I/usr/include
-I/usr/include/libxml2  -I/usr/include/libxml2 -Wall -Wpointer-arith
-Wwrite-strings -Wcomments -Werror  -D_REENTRANT -m64 -g -O2 -g -Wall
-O2 -c -o AsyncCall.lo AsyncCall.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include
-I../../src -I../../include -I/usr/include -I/usr/include/libxml2
-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings
-Wcomments -Werror -D_REENTRANT -m64 -g -O2 -g -Wall -O2 -c
AsyncCall.cc  -fPIC -DPIC -o .libs/AsyncCall.o
In file included from ../../src/squid.h:272,
 from AsyncCall.cc:5:
../../src/ssl_support.h:58: error: expected constructor, destructor,
or type conversion before ‘*’ token
In file included from ../../src/squid.h:272,
 from AsyncCall.cc:5:
../../src/ssl_support.h:61: error: expected constructor, destructor,
or type conversion before ‘*’ token
../../src/ssl_support.h:74: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:74: error: ‘ssl’ was not declared in this scope
../../src/ssl_support.h:77: error: typedef ‘SSLGETATTRIBUTE’ is
initialized (use decltype instead)
../../src/ssl_support.h:77: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:77: error: expected primary-expression before ‘,’ token
../../src/ssl_support.h:77: error: expected primary-expression before ‘const’
../../src/ssl_support.h:80: error: ‘SSLGETATTRIBUTE’ does not name a type
../../src/ssl_support.h:83: error: ‘SSLGETATTRIBUTE’ does not name a type
../../src/ssl_support.h:86: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:86: error: ‘ssl’ was not declared in this scope
../../src/ssl_support.h:89: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:89: error: ‘ssl’ was not declared in this scope
In file included from ../../src/squid.h:318,
 from AsyncCall.cc:5:
../../src/structs.h:615: error: ISO C++ forbids declaration of
‘SSL_CTX’ with no type
../../src/structs.h:615: error: expected ‘;’ before ‘*’ token
../../src/structs.h:960: error: ISO C++ forbids declaration of
‘SSL_CTX’ with no type
../../src/structs.h:960: error: expected ‘;’ before ‘*’ token
../../src/structs.h:961: error: ISO C++ forbids declaration of
‘SSL_SESSION’ with no type
../../src/structs.h:961: error: expected ‘;’ before ‘*’ token
make[4]: *** [AsyncCall.lo] Error 1
make[4]: Leaving directory `/root/kompilacje/1/squid3-3.1.6/src/base'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/root/kompilacje/1/squid3-3.1.6/src'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/root/kompilacje/1/squid3-3.1.6/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/kompilacje/1/squid3-3.1.6'
make: *** [debian/stamp-makefile-build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2
debuild: fatal error at line 1325:
dpkg-buildpackage -rfakeroot -D -us -uc -b failed


Re: [squid-users] Squid 3.2.2 + localhost

2012-10-23 Thread Amos Jeffries

On 23/10/2012 10:21 p.m., alberto.desi wrote:

Hello guys,
I'm becoming crazy!
I am a student and I am working with Squid for a project about content
delivery networks.
Setting up the system (all in IPv6) I have found out some issues.

To set it up I followed http://wiki.squid-cache.org/Features/Tproxy4 because
I need a transparent proxy and for IPv6 this is the way. The problem is that
passing through I have to start a rewrite_url_program, written in perl. All
is working, I pass, the perl file is started and so on. But I have to
rewrite the url either to a machine called origin or to the local machine, where
I have installed Squid. If I rewrite the url to go to the origin, it is ok.
When I rewrite the link to go to the local cache it seems blocked. I am
using these iptables rules:


Sounds familiar. Did you get my response to your earlier request ~10 
days ago?


Some questions:
 * a machine *called* "origin" or an "origin server" ?
 * a machine called "localhost" or the "local machine" ?

The difference in these things is rather important.

A copy of the perl re-writer would be helpful in identifying what it does.

Also, a copy of the cache.log output when squid is run with 
"debug_options 11,2". The line containing "HTTP Server REQUEST", the one 
above it, and the headers below which are generated when sending your 
re-written request would also be very useful.


Amos



ip6tables -t mangle -N DIVERT
ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark
0x1/0x1 --on-port 3129

and I set up also

ip -f inet6 rule add fwmark 1 lookup 100
ip -f inet6 route add local default dev eth0 table 100  <= changed dev with
arriving interface (for me eth2)

It seems that the packets redirected to the cache of the machine with squid
are in an internal loop.
Can you help me to understand why, and maybe how to find a solution?
This mechanism was working with the IPv4 rules for the transparent proxy
(with the nat chain), but here with IPv6 things are different!!!

Thanks a lot

P.S.
all the machines are running
- Linux Ubuntu, kernel 2.6.39
- iptables v. 1.4.16.2
- Squid 3.2.2


*[squid.conf]* file
#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl localnet src 3001::/64
acl localnet src 5001::/64

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
http_access allow all
# And finally deny all other access to this proxy
#http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 3129 tproxy

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
visible_hostname machine1
url_rewrite_program /home/alberto/rewriteurl.pl





Re: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-23 Thread Amos Jeffries

On 23/10/2012 9:17 p.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:

This problem starts randomly and recovers randomly too (after restarting, 
rebooting, etc). We restart httpd and it works fine for 2 min. and then the problem 
starts again; we reboot the whole server and the problem appears again; sometimes 
the problem disappears after removing the cache directory and re-creating it.

Do you know what could cause socket() and dup2() operations start to fail?



"

   The socket() function shall fail if:

   [EAFNOSUPPORT]
   The implementation does not support the specified address family.
   [EMFILE]
   No more file descriptors are available for this process.
   [ENFILE]
   No more file descriptors are available for the system.
   [EPROTONOSUPPORT]
   The protocol is not supported by the address family, or the
   protocol is not supported by the implementation.
   [EPROTOTYPE]
   The socket type is not supported by the protocol.

   The socket() function may fail if:

   [EACCES]
   The process does not have appropriate privileges.
   [ENOBUFS]
   Insufficient resources were available in the system to perform
   the operation.
   [ENOMEM]
   Insufficient memory was available to fulfill the request.


"

Being squid-3.0 the address family errors do not apply. The rest of the 
system problems may still apply though.


Amos


[squid-users] Squid 3.2.2 + localhost

2012-10-23 Thread alberto.desi
Hello guys,
I'm becoming crazy!
I am a student and I am working with Squid for a project about content
delivery networks.
Setting up the system (all in IPv6) I have found out some issues.

To set it up I followed http://wiki.squid-cache.org/Features/Tproxy4 because
I need a transparent proxy and for IPv6 this is the way. The problem is that
passing through I have to start a rewrite_url_program, written in perl. All
is working, I pass, the perl file is started and so on. But I have to
rewrite the url either to a machine called origin or to the local machine, where
I have installed Squid. If I rewrite the url to go to the origin, it is ok.
When I rewrite the link to go to the local cache it seems blocked. I am
using these iptables rules:

ip6tables -t mangle -N DIVERT
ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark
0x1/0x1 --on-port 3129

and I set up also

ip -f inet6 rule add fwmark 1 lookup 100
ip -f inet6 route add local default dev eth0 table 100  <= changed dev with
arriving interface (for me eth2) 

It seems that the packets redirected to the cache of the machine with squid
are in an internal loop.
Can you help me to understand why, and maybe how to find a solution?
This mechanism was working with the IPv4 rules for the transparent proxy
(with the nat chain), but here with IPv6 things are different!!!

Thanks a lot

P.S.
all the machines are running
- Linux Ubuntu, kernel 2.6.39
- iptables v. 1.4.16.2
- Squid 3.2.2


*[squid.conf]* file
#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl localnet src 3001::/64
acl localnet src 5001::/64

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
http_access allow all
# And finally deny all other access to this proxy
#http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 3129 tproxy

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
visible_hostname machine1
url_rewrite_program /home/alberto/rewriteurl.pl



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-2-2-localhost-tp4657098.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-23 Thread RODRIGUEZ CEBERIO, Iñigo
This problem starts randomly and recovers randomly too (after restarting, 
rebooting, etc). We restart httpd and it works fine for 2 min. and then the problem 
starts again; we reboot the whole server and the problem appears again; sometimes 
the problem disappears after removing the cache directory and re-creating it.

Do you know what could cause socket() and dup2() operations start to fail?

Thanks, Inigo

-Mensaje original-
De: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Enviado el: martes, 23 de octubre de 2012 10:07
Para: squid-users@squid-cache.org
Asunto: Re: [squid-users] "Reserved number of file descriptors" sudden increase

On 23/10/2012 7:53 p.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:
> Thank you for replying so quickly. I'll upgrade my squid.
>
> However, in this case we have 2673 of 4096 in use and the squid is stuck 
> because the reserved number of file descriptors rises from 100 to 1400.

Oh right yes, Squid will start limiting inbound accepted connections when it 
reaches the reserved limit.

The Reserved FD starts at 100 default value and ONLY increases when
socket() and dup2() operations which create new sockets start to fail. 
When that happens it is a strong sign that the OS cannot handle that many 
sockets, and Squid limits itself to reserve the unused ones.

I don't see any code to reduce the number - which worries me. Maybe you hit a 
fluke occurrence of socket() errors and are now stuck with a large reserved set.

Amos

>
> The normal situation is
> File descriptor usage for squid:
>Maximum number of file descriptors:   4096
>Largest file desc currently in use:   3193
>Number of file desc currently in use: 2673
>Files queued for open:   0
>Available number of file descriptors: 1423
>Reserved number of file descriptors:   100
>Store Disk files open:   1
>
> When the problem starts the only change is
>
> Reserved number of file descriptors:   100 -> 1424
>
> Regards, Inigo.
>
>
> -Mensaje original-
> De: Amos Jeffries [mailto:squ...@treenet.co.nz] Enviado el: martes, 23 
> de octubre de 2012 5:21
> Para: squid-users@squid-cache.org
> Asunto: Re: [squid-users] "Reserved number of file descriptors" sudden 
> increase
>
> On 23/10/2012 3:47 a.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:
>> Hello,
>>
>> I'm running squid 3.0.STABLE13 on a CentOS 5.2.
> Please upgrade, your Squid is no longer supported. Current Squid release is 
> version 3.2.3.
>
>>It works normally and suddenly it collapsed. In the cache.log appear 
>> messages about running out of file descriptors. Using squidclient I can see a 
>> change of this parameter, "Reserved number of file descriptors", from 100 to 
>> 1424. Here it is the squidclient info about FD:
>>
>> File descriptor usage for squid:
>>   Maximum number of file descriptors:   4096
>>   Largest file desc currently in use:   3193
>>   Number of file desc currently in use: 2673
>>   Files queued for open:   0
>>   Available number of file descriptors: 1423
>>   Reserved number of file descriptors:  1424
>>   Store Disk files open:   1
>>
>> Why does that parameter rise from 100 to 1400 in just a few seconds? What's 
>> going on? Any piece of advice?
> 1400 does not matter. The 2673 is more important - this is number of FD 
> currently open and in use.
>
> It can raise in three situations:
>1) scanning the disk cache in a "DIRTY" scan to rebuild the index file by 
> file. Requires opening every file on disk and can consume hundreds of FD at 
> once for the one process.
>
>2) receiving lots of client traffic. Might be a normal peak in traffic, a 
> DoS, or a broken client hammering away repeating a request (usually seen with 
> auth rejecteions).
>
>3) a forwarding loop, where Squid is processing a request which instructs 
> it to connect to itself as upstream. This is best prevented by configuring 
> "via on".
>
> Amos



Re: [squid-users] "Reserved number of file descriptors" sudden increase

2012-10-23 Thread Amos Jeffries

On 23/10/2012 7:53 p.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:

Thank you for replying so quickly. I'll upgrade my squid.

However, in this case we have 2673 of 4096 in use and the squid is stuck 
because the reserved number of file descriptors rises from 100 to 1400.


Oh right yes, Squid will start limiting inbound accepted connections 
when it reaches the reserved limit.


The Reserved FD starts at 100 default value and ONLY increases when 
socket() and dup2() operations which create new sockets start to fail. 
When that happens it is a strong sign that the OS cannot handle that 
many sockets, and Squid limits itself to reserve the unused ones.
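
So the first thing to check is the limit the OS actually grants the squid 
process (a sketch; the numbers are only illustrative):

  # limit of the environment that starts squid
  ulimit -n

  # raise it in the init script before squid starts, e.g.
  ulimit -n 8192

  # and, on Squid versions that support it, cap what Squid will try to use
  max_filedescriptors 8192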


I don't see any code to reduce the number - which worries me. Maybe you 
hit a fluke occurrence of socket() errors and are now stuck with a large 
reserved set.


Amos



The normal situation is
File descriptor usage for squid:
   Maximum number of file descriptors:   4096
   Largest file desc currently in use:   3193
   Number of file desc currently in use: 2673
   Files queued for open:   0
   Available number of file descriptors: 1423
   Reserved number of file descriptors:   100
   Store Disk files open:   1

When the problem starts the only change is

Reserved number of file descriptors:   100 -> 1424

Regards, Inigo.


-Mensaje original-
De: Amos Jeffries [mailto:squ...@treenet.co.nz]
Enviado el: martes, 23 de octubre de 2012 5:21
Para: squid-users@squid-cache.org
Asunto: Re: [squid-users] "Reserved number of file descriptors" sudden increase

On 23/10/2012 3:47 a.m., "RODRIGUEZ CEBERIO, Iñigo" wrote:

Hello,
   
I'm running squid 3.0.STABLE13 on a CentOS 5.2.

Please upgrade, your Squid is no longer supported. Current Squid release is 
version 3.2.3.


   It works normally and suddenly it collapsed. In the cache.log appear messages about 
running out of file descriptors. Using squidclient I can see a change of this parameter, 
"Reserved number of file descriptors", from 100 to 1424. Here it is the 
squidclient info about FD:
   
File descriptor usage for squid:

  Maximum number of file descriptors:   4096
  Largest file desc currently in use:   3193
  Number of file desc currently in use: 2673
  Files queued for open:   0
  Available number of file descriptors: 1423
  Reserved number of file descriptors:  1424
  Store Disk files open:   1
   
Why does that parameter rise from 100 to 1400 in just a few seconds? What's going on? Any piece of advice?

1400 does not matter. The 2673 is more important - this is number of FD 
currently open and in use.

It can raise in three situations:
   1) scanning the disk cache in a "DIRTY" scan to rebuild the index file by 
file. Requires opening every file on disk and can consume hundreds of FD at once for the 
one process.

   2) receiving lots of client traffic. Might be a normal peak in traffic, a 
DoS, or a broken client hammering away repeating a request (usually seen with 
auth rejections).

   3) a forwarding loop, where Squid is processing a request which instructs it to 
connect to itself as upstream. This is best prevented by configuring "via on".

Amos




[squid-users] 3.3.0.1 warning on reload - max_filedescriptors disabled

2012-10-23 Thread Amm
Hello all,


I am trying out the 3.3.0.1 beta on Fedora 16 64 bit (kernel 3.4.11-1.fc16.x86_64 
#1 SMP).

I have created an RPM file using the same spec file and patches as 3.2.1 (which I have 
been using for a month without any issues).

In squid.conf, I have "max_filedescriptors 4096"

When I start squid (3.3.0.1) using "systemctl start squid.service"

I see this in log file:
2012/10/23 12:52:05 kid1| With 16384 file descriptors available

So I am not sure why it is showing 16384 instead of 4096

In 3.2.1 with exactly same squid.conf, it was showing:
2012/10/23 08:36:29 kid1| With 4096 file descriptors available


Secondly, when I reload squid (3.3.0.1) using "systemctl reload squid.service"

Log file shows this:
2012/10/23 11:09:01 kid1| WARNING: max_filedescriptors disabled. Operating 
System setrlimit(RLIMIT_NOFILE) is missing.

I want to make sure that even after squid reloads, it at least maintains 4096 as 
the max and does not reduce it to 1024 or so.
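
(One way to pin the limit regardless of what the reload path does, assuming 
the squid.service unit mentioned above is the one in use: set it in the 
[Service] section of the unit file,

  LimitNOFILE=16384

then "systemctl daemon-reload" and restart the service.)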



Thirdly, in an unrelated log entry, just now I noticed this:
2012/10/23 12:51:59 kid1| assertion failed: forward.cc:217: "err"
2012/10/23 12:52:05 kid1| Starting Squid Cache version 3.3.0.1 for 
x86_64-unknown-linux-gnu...


It appears that squid crashed and restarted. But there is not much information 
on why. Maybe something in forward.cc:217.

So just reporting - please check.

Thank you,

Amm.



Re: [squid-users] squid 3.2.3 crashed with FATAL error

2012-10-23 Thread Amos Jeffries

On 23/10/2012 8:10 p.m., Ben wrote:

Hi,


On 23/10/2012 5:07 a.m., Ben wrote:

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the 
error "FATAL: Bungled (null) line 192: icap_retry deny all". What 
could be the reason behind this problem? How do I resolve it?


Did you ./configure  using --enable-icap-client ?

Yes, I configured with these options.

Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-storeio=aufs,ufs' '--enable-removal-policies=lru,heap' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' 
'--enable-wccp' '--enable-wccp2' '--enable-kill-parent-hack' 
'--enable-http-violations' '--enable-async-io=128' 
'--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
'--enable-libcap' --enable-ltdl-convenience



Amos



Since yesterday there are no more entries for this fatal error. What does 
this error mean?


I'm not exactly sure what the bungled is about. I've just patched latest 
3.HEAD to explain "(null)" better. That means one of the default values 
built-in to Squid is broken.


This message is saying that the built-in default value, used when there is 
nothing in your squid.conf about icap_retry, could not be defined.
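
One untested thought: since the line that fails to parse is the built-in 
default, explicitly putting that same line into squid.conf may sidestep the 
broken default:

  icap_retry deny all

but that is a guess, not a confirmed fix.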



What do you mean by "since last day" ...  you have a new build that 
works? or you added icap_retry to the config and it works? or no changes 
and it just started working?



Amos


Re: [squid-users] squid 3.2.3 crashed with FATAL error

2012-10-23 Thread Ben

Hi,


On 23/10/2012 5:07 a.m., Ben wrote:

Hi,

My squid 3.2.3 (latest version) keeps restarting automatically with the 
error "FATAL: Bungled (null) line 192: icap_retry deny all". What 
could be the reason behind this problem? How do I resolve it?


Did you ./configure  using --enable-icap-client ?

Yes, I configured with these options.

Squid Cache: Version 3.2.3
configure options:  '--prefix=/opt/squid-3.2' 
'--enable-storeio=aufs,ufs' '--enable-removal-policies=lru,heap' 
'--enable-cachemgr-hostname=CACHE-Engine' '--enable-linux-netfilter' 
'--enable-follow-x-forwarded-for' '--disable-auth' '--disable-ipv6' 
'--enable-zph-qos' '--with-large-files' '--enable-snmp' '--enable-wccp' 
'--enable-wccp2' '--enable-kill-parent-hack' '--enable-http-violations' 
'--enable-async-io=128' '--enable-err-languages=English' 
'--enable-default-err-language=English' '--enable-icap-client' 
'--enable-libcap' --enable-ltdl-convenience



Amos



Since yesterday there are no more entries for this fatal error. What does 
this error mean?


Ben