[squid-users] can't compile 3.1.0.15

2009-12-05 Thread Landy Landy
I'm trying to upgrade from 3.1.0.14 to 3.1.0.15 but I'm getting an error when 
compiling: 

c MessageRep.cc  -fPIC -DPIC -o .libs/MessageRep.o
cc1plus: warnings being treated as errors
MessageRep.cc: In member function 'libecap::Name 
Adaptation::Ecap::FirstLineRep::protocol() const':
MessageRep.cc:120: warning: enumeration value 'PROTO_ICY' not handled in switch
make[4]: *** [MessageRep.lo] Error 1
make[4]: Leaving directory `/workingdir/squid-3.1.0.15/src/adaptation/ecap'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/workingdir/squid-3.1.0.15/src/adaptation'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/workingdir/squid-3.1.0.15/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/workingdir/squid-3.1.0.15/src'
make: *** [all-recursive] Error 1


These are the options I'm using:
./configure --prefix=/usr/local/squid \
--sysconfdir=/etc/squid3.1 \
--enable-delay-pools \
--enable-kill-parent-hack \
--disable-htcp \
--enable-default-err-language=Spanish \
--enable-linux-netfilter \
--disable-ident-lookups \
--localstatedir=/var/log/squid3.1 \
--enable-stacktraces \
--with-default-user=proxy \
--with-large-files \
--enable-icap-client \
--enable-ecap \
--enable-async-io \
--enable-storeio="aufs" \
--enable-removal-policies="heap,lru" \
--with-maxfd=16384


These options worked with 3.1.0.14; I don't know why 3.1.0.15 is not compiling.
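
A sketch of two possible workarounds, assuming the only failure is this
unhandled-enum warning being promoted to an error by -Werror (both
untested):

  # 1) if your 3.1 configure supports it, build without -Werror:
  ./configure --disable-strict-error-checking ...your other options...

  # 2) or patch src/adaptation/ecap/MessageRep.cc around line 120 so the
  #    switch covers the new PROTO_ICY value, handling it like the other
  #    protocols that have no libecap name.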


Re: [squid-users] Squid at 100% CPU with 10 minutes period

2009-12-05 Thread Guy Bashkansky
Amos, thanks for the links.  I've looked at the mailing list and the
open bugs, and could not find anything similar to what I see, with
the 10-minute period.

We're using a customized version of Squid 2.4 STABLE6, and it's not in
my power to upgrade it to any later version...  It runs on FreeBSD
6.2-RELEASE-p9 amd64 servers.

I know I need some profiling/debugging information to determine where
the CPU spends its cycles, but on these servers most of the usual tools are
either absent or not working very well:
there's no 'oprofile' for FreeBSD, 'pmcstat' fails to run (no lib?),
'gprof' does not give info beyond parseConfigFile() even in my custom
profiling-enabled build with -N, 'gdb' does not recognize the debug
info, and 'strace' is not installed.

I've found the 'truss' command to be working and traced the system calls
made by the squid process, trying to recognize some patterns -- I noticed
that during a CPU load spike write() sometimes returns EPIPE, 'Broken
pipe'.
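
For reference, a minimal sketch of that kind of truss run (FreeBSD truss;
the PID is a placeholder):

  # attach to the running squid process and timestamp each syscall
  truss -d -p <squid-pid> -o /tmp/squid-truss.log

Correlating the -d timestamps against the 10-minute spikes may show which
syscalls dominate during the load.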

Does my version (2.4 STABLE6) ring any bells?


On Sat, Dec 5, 2009 at 3:33 AM, Amos Jeffries  wrote:
>
> Guy Bashkansky wrote:
>>
>> Amos,
>>
>> Where can I find a list of the solved "Squid uses 100% CPU" bugs?  It
>> would help me figure out which one I may be experiencing.
>
> Sorry for being gruff.
>
> The mailing list queries:
>  http://squid.markmail.org/search/?q=squid+uses+100%25+cpu
>
> The remaining open bugs mentioning it:
> http://bugs.squid-cache.org/buglist.cgi?quicksearch=100%25+CPU
>
> The closed ones are hard to find as they have been re-named to say what the 
> actual problem was.
>
>
> Which release and version of squid is this?
>
>>
>> When I attach gdb to the running squid process, it does not find
>> debugging info, despite compiling with -g flag (maybe a gdb setup
>> problem, nm does show symbols).
>>
>> When I try to use gprof for profiling (squid compiled with -pg),
>> squid.gmon contains only the initial configuration functions
>> profiling.  I stop squid with -k shutdown signal so it exits normally
>> (which is necessary to produce squid.gmon).
>
> Squid needs to be started with -N to prevent forking a child daemon that does 
> all the actual work. The main process can then be traced.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
>  Current Beta Squid 3.1.0.15


[squid-users] squid failing every 5 minutes

2009-12-05 Thread Landy Landy
Hello.

I'm reposting about squid stopping and restarting every 5 minutes. I didn't get 
any more responses about it the last time I posted and haven't found a solution 
to the problem.

Here's part of syslog:

Dec  5 21:50:38 optimum-router squid[4769]: Squid Parent: child process 11973 
exited due to signal 6 with status 0
Dec  5 21:50:41 optimum-router squid[4769]: Squid Parent: child process 12061 
started
Dec  5 21:55:35 optimum-router squid[4769]: Squid Parent: child process 12061 
exited due to signal 6 with status 0
Dec  5 21:55:38 optimum-router squid[4769]: Squid Parent: child process 12183 
started
Dec  5 22:00:35 optimum-router squid[4769]: Squid Parent: child process 12183 
exited due to signal 6 with status 0
Dec  5 22:00:38 optimum-router squid[4769]: Squid Parent: child process 12252 
started
Dec  5 22:05:36 optimum-router squid[4769]: Squid Parent: child process 12252 
exited due to signal 6 with status 0
Dec  5 22:05:39 optimum-router squid[4769]: Squid Parent: child process 12313 
started
Dec  5 22:07:58 optimum-router squid[12391]: Squid Parent: child process 12393 
started
Dec  5 22:10:36 optimum-router squid[12391]: Squid Parent: child process 12393 
exited due to signal 6 with status 0
Dec  5 22:10:39 optimum-router squid[12391]: Squid Parent: child process 12522 
started
Dec  5 22:15:41 optimum-router squid[12391]: Squid Parent: child process 12522 
exited due to signal 6 with status 0
Dec  5 22:15:44 optimum-router squid[12391]: Squid Parent: child process 31594 
started
Dec  5 22:20:35 optimum-router squid[12391]: Squid Parent: child process 31594 
exited due to signal 6 with status 0
Dec  5 22:20:38 optimum-router squid[12391]: Squid Parent: child process 5466 
started

Dec  5 22:25:36 optimum-router squid[12391]: Squid Parent: child process 5466 
exited due to signal 6 with status 0
Dec  5 22:25:39 optimum-router squid[12391]: Squid Parent: child process 13301 
started


And here's cache.log:

2009/12/05 22:15:55| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 14: (2) No such file or directory
2009/12/05 22:15:55| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 15: (2) No such file or directory
(squid)(death+0x4b)[0x818b8cb]
[0xb7fb3420]
/usr/local/lib/ecap_adapter_gzip.so(_ZN7Adapter7Xaction17noteVbContentDoneEb+0x61)[0xb7aba411]
(squid)(_ZN10Adaptation4Ecap10XactionRep23noteBodyProductionEndedE8RefCountI8BodyPipeE+0x61)[0x81fdd61]
(squid)(_ZN12UnaryMemFunTI12BodyConsumer8RefCountI8BodyPipeEE6doDialEv+0x48)[0x80f1208]
(squid)(_ZN9JobDialer4dialER9AsyncCall+0x51)[0x8199661]
(squid)(_ZN10AsyncCallTI18BodyConsumerDialerE4fireEv+0x18)[0x80f1138]
(squid)(_ZN9AsyncCall4makeEv+0x17b)[0x819888b]
(squid)(_ZN14AsyncCallQueue8fireNextEv+0xf9)[0x819b4a9]
(squid)(_ZN14AsyncCallQueue4fireEv+0x28)[0x819b5a8]
(squid)(_ZN9EventLoop13dispatchCallsEv+0x13)[0x81096c3]
(squid)(_ZN9EventLoop7runOnceEv+0xf6)[0x81098b6]
(squid)(_ZN9EventLoop3runEv+0x28)[0x8109988]
(squid)(_Z9SquidMainiPPc+0x4b5)[0x8151e75]
(squid)(main+0x27)[0x81522f7]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xc8)[0xb7cfaea8]
(squid)(__gxx_personality_v0+0x155)[0x80bd081]
FATAL: Received Segment Violation...dying.
2009/12/05 22:20:35| storeDirWriteCleanLogs: Starting...
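
The backtrace above points into the eCAP gzip adapter
(ecap_adapter_gzip.so). One way to test whether the adapter is at fault is
to run briefly without it; a sketch, assuming a typical 3.1 eCAP setup
(your exact squid.conf lines will differ):

  # comment these out in squid.conf, then restart and watch the logs:
  # loadable_modules /usr/local/lib/ecap_adapter_gzip.so
  # ecap_service gzip_service respmod_precache 0 ecap://example.net/gzip

If the five-minute signal 6 cycle stops, the crash is in the adapter or in
its interaction with Squid's body pipes.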


RE: [squid-users] squid ceasing to function when interface goes down

2009-12-05 Thread tyler


>  Original Message 
> Subject: Re: [squid-users] squid ceasing to function when interface
> goes down
> From: Amos Jeffries 
> Date: Sat, December 05, 2009 5:02 pm
> To: squid-users@squid-cache.org
> 
> 
> 
> Two things to try:
> 
>   1) server_persistent_connections off. This should prevent Squid using 
> any connections to the parent which are possibly hung.
Very good.  Thank you.
 
>   2) Make sure ICMP is built in and going so Squid can actively measure 
> the network link to the parent. This is a simpler up-detection 
> replacement for ICP which you have disabled by using "no-query".
I've got ICMP built in now, but don't see any relevant options for it.
query_icmp USES icp, which I doubt is what you meant...
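
(A sketch of the directives usually involved, assuming your version has
them; the path is illustrative:

  pinger_program /usr/local/squid/libexec/pinger
  pinger_enable on

The pinger feeds RTT measurements into peer selection without using ICP.)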

> If you are willing to participate in a small experiment. Would you mind 
> updating that Squid to the 3.1 beta?
I'd rather not experiment too freely on remote systems that are hard to 
get to in the winter, but thanks for asking.



Re: [squid-users] squid ceasing to function when interface goes down

2009-12-05 Thread Amos Jeffries

ty...@marieval.com wrote:



 Original Message 
Subject: Re: [squid-users] squid ceasing to function when interface
goes down
From: Amos Jeffries 
Date: Sat, December 05, 2009 5:27 am
To: squid-users@squid-cache.org


I have more information on the errors...  it seems that whenever this
happens, it stops trying to access the upstream proxy at 10.0.0.1, and
just tries everything direct, which of course fails since I have
disallowed direct. I want it to use the VPN.  And it can -- the instant
I reset it, everything's fine for a few hours until it decides to start
refusing to forward to my default cache_peer again.  Here's my config: 
http://pastebin.ca/1703265





Two things to try:

 1) server_persistent_connections off. This should prevent Squid using 
any connections to the parent which are possibly hung.


 2) Make sure ICMP is built in and going so Squid can actively measure 
the network link to the parent. This is a simpler up-detection 
replacement for ICP which you have disabled by using "no-query".



If you are willing to participate in a small experiment. Would you mind 
updating that Squid to the 3.1 beta?
That will also give you access to the cache_peer connect-fail-limit=N 
configuration option which reduces the time to detect down links to N 
instead of 10.
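
For example, a hypothetical peer line using it (host, port and limit are
illustrative):

  cache_peer 10.0.0.1 parent 3128 0 no-query default connect-fail-limit=3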



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


RE: [squid-users] squid ceasing to function when interface goes down

2009-12-05 Thread tyler


>  Original Message 
> Subject: Re: [squid-users] squid ceasing to function when interface
> goes down
> From: Amos Jeffries 
> Date: Sat, December 05, 2009 5:27 am
> To: squid-users@squid-cache.org

I have more information on the errors...  it seems that whenever this
happens, it stops trying to access the upstream proxy at 10.0.0.1, and
just tries everything direct, which of course fails since I have
disallowed direct. I want it to use the VPN.  And it can -- the instant
I reset it, everything's fine for a few hours until it decides to start
refusing to forward to my default cache_peer again.  Here's my config: 
http://pastebin.ca/1703265




RE: [squid-users] squid ceasing to function when interface goes down

2009-12-05 Thread Mike Marchywka

> Date: Sun, 6 Dec 2009 00:27:23 +1300
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] squid ceasing to function when interface goes down
>
> ty...@marieval.com wrote:
>> I'm using squid with a parent that's over a VPN. This VPN occasionally
>> times out or goes down; I'm having difficulty deducing which, but
>> theoretically my scripts should reboot it whenever the VPN goes up so I
>> suspect the former.
>>
>> The end result is that my downstream squid occasionally decides to
>> return nothing but 404's ever again until manually restarted. I'm
>> having to watch it like a hawk and can't keep that up all Christmas. If
>> it would just reopen the interface on its own, all would be well. Is
>> there any way to make it deal with this condition gracefully instead of
>> flipping out?
>>
>
> The normal way to handle these things is to use a) reliable network
> connections, or b) multiple interfaces.
>
> What Squid is this?
>
> Things should go back to normal when the interface actually comes back
> up again.

On my Debian install, which I am making up as I go along and so may not
be a robust example of anything, I had a similar issue and had to use
squid -k to restart. I have two interfaces, a LAN card and a wireless card
using ndiswrapper with ipmasq. (Eventually I want to attach all my local
computers in this room to a router and use the wireless interface to reach
the WAN via a wireless router on a cable modem, giving me just one wireless
connection and some way to monitor the packets from all machines, and
eventually install a wireless modem, etc.)
In any case, when I boot, if I start the browsers before connecting to the
WLAN it doesn't work (duh), but even after connecting I had to restart squid.



>
> If the interface is up, the parent proxy contactable and Squid is still
> sending out the error pages, you need to take a good look at those error
> pages and see *why* Squid thinks they are still valid. Probably turn
> negative_ttl down to an HTTP-compliant 0 seconds as well.

Do you have a FAQ page or link on how the peering is handled? I stumbled
onto something regarding the config for it, but it wasn't quite clear how
all these things work together in a fault-tolerant way. Thanks.


>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
> Current Beta Squid 3.1.0.15

Re: [squid-users] Squid at 100% CPU with 10 minutes period

2009-12-05 Thread Amos Jeffries

Guy Bashkansky wrote:

Amos,

Where can I find a list of the solved "Squid uses 100% CPU" bugs?  It
would help me figure out which one I may be experiencing.


Sorry for being gruff.

The mailing list queries:
 http://squid.markmail.org/search/?q=squid+uses+100%25+cpu

The remaining open bugs mentioning it:
http://bugs.squid-cache.org/buglist.cgi?quicksearch=100%25+CPU

The closed ones are hard to find as they have been re-named to say what 
the actual problem was.



Which release and version of squid is this?



When I attach gdb to the running squid process, it does not find
debugging info, despite compiling with -g flag (maybe a gdb setup
problem, nm does show symbols).

When I try to use gprof for profiling (squid compiled with -pg),
squid.gmon contains only the initial configuration functions
profiling.  I stop squid with -k shutdown signal so it exits normally
(which is necessary to produce squid.gmon).


Squid needs to be started with -N to prevent forking a child daemon that 
does all the actual work. The main process can then be traced.
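
For example, a sketch of such a run (the debug level is illustrative):

  squid -N -d 1

keeps the single process in the foreground, where gdb can attach to it and
where the gprof output covers the request-handling code rather than just
the parent's startup.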


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


Re: [squid-users] squid ceasing to function when interface goes down

2009-12-05 Thread Amos Jeffries

ty...@marieval.com wrote:

I'm using squid with a parent that's over a VPN.  This VPN occasionally
times out or goes down;  I'm having difficulty deducing which, but
theoretically my scripts should reboot it whenever the VPN goes up so I
suspect the former.

The end result is that my downstream squid occasionally decides to
return nothing but 404's ever again until manually restarted.  I'm
having to watch it like a hawk and can't keep that up all Christmas.  If
it would just reopen the interface on its own, all would be well.  Is
there any way to make it deal with this condition gracefully instead of
flipping out?



The normal way to handle these things is to use a) reliable network 
connections, or b) multiple interfaces.


What Squid is this?

Things should go back to normal when the interface actually comes back 
up again.


If the interface is up, the parent proxy contactable and Squid is still 
sending out the error pages, you need to take a good look at those error 
pages and see *why* Squid thinks they are still valid. Probably turn 
negative_ttl down to an HTTP-compliant 0 seconds as well.
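
In squid.conf terms that is simply (a sketch):

  negative_ttl 0 seconds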


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


Re: [squid-users] Squid delay pool question

2009-12-05 Thread mikewest09

Hi Amos,

Thanks a lot for your detailed explanation; I believe that I had a big
misunderstanding of how the classes work.

Having said that, I am not sure if class 4 will be the best one for me
because of two important reasons:

A. All of our users log in with the same exact login name/password, as it
is embedded in the desktop application's exe file. So what we have here is
the same login name/password and a different IP for each user.

B. As mentioned before, the server has 100 Mbps. My thought (at first)
was that I wanted each user to get, for example, maximum speed, and then all
of them would have the same 10 Mbps. But I never imagined that the connection
speed (100 or 10) would be divided among the number of users logged in,
meaning I couldn't imagine that when I drop the speed to 10 Mbps for user A,
then all users will have this speed divided by the number of users logged
into the server (and this is of course due to my ignorance of network
basics :( )

So my question now is: is it possible in the first place that each user
will get the same 10 Mbps regardless of the number of users connected to the
server (please excuse my network ignorance here if what I say seems
impossible)?
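
(Sketch: note that since point A says every user shares one username, a
class 4 per-user bucket would really be a single bucket shared by everyone.
A class 2 "individual" (per client IP) bucket with an unlimited aggregate
is closer to "each user gets 10 Mbps regardless of the others"; rates are
illustrative, 10 Mbps being roughly 1250000 bytes/sec:

  delay_pools 1
  delay_class 1 2
  delay_access 1 allow all
  # aggregate, then per-IP individual: restore/max in bytes
  delay_parameters 1 -1/-1 1250000/1250000

Each client IP then gets its own 10 Mbps allowance, up to the real
capacity of the 100 Mbps link.)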


Now, if this is not possible, is it possible to simply limit the use of
the server to browsing html files only and exclude any downloads (exe,
mp3, etc.) without putting any limitation on speed? If I can do this then
there might be no need for the delay pools limitation in the first place!
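
(A sketch of the usual ACL approach to that, placed above the existing
http_access allow rules; the acl name and extension list are illustrative,
not exhaustive:

  acl blocked_downloads urlpath_regex -i \.(exe|zip|mp3|avi|iso)$
  http_access deny blocked_downloads

This matches on the URL path only, so downloads served without a telltale
extension will slip through.)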


Thanks in advance for your time and efforts 


Amos Jeffries-2 wrote:
> 
> mikewest09 wrote:
>> Hi All,
>> 
>> Thanks Amos and Chris for your time and help, I have few more questions
>> so I
>> will appreciate it so much if you can please take a look at them
>> 
>> A. As a start to avoid confusion here are some facts regarding our server
>> and users
>> - All of our users connect to our proxy (SQUID) through an application.
>> This
>> desktop application contains built-in (User name / password) for
>> authenticating each of our users to use our SQUID proxy (installed on our
>> server) AND also contains the IP of our server (which have SQUId
>> installed
>> on it)
>> 
>> - Our users come from many different locations around the world and many
>> of
>> them (most of them actually) don't have 'static Ip's so I can't setup
>> specific range of IP's
>> 
>> - We have more than 100,000 registered users (besides those who will
>> register in the future) from different locations around the world. Of
>> course
>> not all of them will login together at a specific moment, but many of
>> them
>> can be on the server at the same moment and this was one of the major
>> reasons why we need to regulate SQUID traffic usage
>> 
>> B. Even though I have read info about classes 1, 2, 3 and 4, I can't
>> understand up to this moment which will be better for us. So based on the
>> info I just provided in A, do you have any recommendations, especially
>> taking into consideration the amount of users we have?
> 
> Depends on what you consider the most stable detail: username or IP.
> 
> Given that you said most users are on dynamic IPs I'd go with username: 
> class 4, with all fields except the user one set to -1/-1.
> 
>> 
>> C. delay_parameters 1 -1/-1 1310720/15728640
> 
>> Isn't -1/-1 for unlimited access? I mean, can you please explain this
>> rule so I can understand its limitations?
> 
> "unlimited" here means only "unlimited by Squid" the external network 
> settings still apply.
> 
> 
> If you think of it like a geographic hierarchy it makes more sense.
> 
> 
>   class 1: for limiting the entire network HTTP bandwidth.
> 
>   class 2: for limiting individual machines
>   + maybe entire network.
> 
>   class 3: for limiting individual machines
>   + subgroups of machines
>   + maybe entire network.
> 
>   class 4: for limiting on login usernames
>   + maybe individual machines
>   + maybe subgroups of machines
>   + maybe entire network.
> 
>   class 5: for advanced custom controls.
> 
> 
> Let's take a small office LAN as the theoretical situation:
> 
> 
>   With class 1 the office can have a 10Mbps pipe and a policy of 
> allowing only 2Mbps for web browsing traffic "aggregate" (8Mbps 
> dedicated to the phones or external services etc).
> 
> Effects: any single office PC can reach 2Mbps alone, or four can each 
> simultaneously do 512Kbps. The 2Mbps is parceled equally between all.
> 
> 
> With class 2 they can take their 2Mbps traffic policy "aggregate" and 
> add a condition that no one office machine can use more than 256Kbps at 
> once "individual".
> 
> Effects: running 3 or fewer PCs at once will leave some hundreds of Kbps 
> "wasted" empty, even as each PC runs at its peak 256Kbps. Nine or 
> more PCs will start to degrade each other's experience as the 2Mbps is 
> parceled equally between all, e.g. 200Kbps each with ten PCs.
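
A sketch of the two scenarios above in squid.conf terms, taking 2Mbps as
250000 bytes/sec and 256Kbps as 32000 bytes/sec (pool number illustrative):

  # class 1: 2Mbps aggregate only
  delay_pools 1
  delay_class 1 1
  delay_access 1 allow all
  delay_parameters 1 250000/250000

  # class 2 alternative: same aggregate plus a per-machine cap
  # delay_class 1 2
  # delay_parameters 1 250000/250000 32000/32000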
> 
> 
> With class 3 they can have two offices n