Re: [squid-users] https://wiki.squid-cache.org provides invalid certificate chain ...

2017-11-17 Thread Kinkie
I have already acted on it but couldn’t communicate in time, sorry. Thanks
for notifying and for looking into it.


On Fri, 17 Nov 2017 at 17:52, Amos Jeffries  wrote:

> On 18/11/17 01:39, Walter H. wrote:
> > for more information see
> > https://www.ssllabs.com/ssltest/analyze.html?d=wiki.squid-cache.org
> >
> > - missing intermediate certificate
> > - ssl3 active, poodle vulnerable ...
> >
>
> None of those issues appear in the test results I get from that URL you
> referenced. SSLv3 is definitely not even supported by our wiki server.
>
> The tester appears to be broken in regards to the chain test. There is
> *no* chain. Our cert is directly signed by the LetsEncrypt CA.
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
-- 
@mobile
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-09-20 Thread Kinkie
Hi Fred,
  I assume that by "implicit" you mean "transparent" or
"interception". Short answer: not possible, because there is nothing to
anchor cookies to. It would be possible to fake it by having an auxiliary
website do standard SAML and feed a database of userid-IP associations.
That will fail to account for cases where multiple users share the same
IP, but that doesn't stop many vendors from claiming they do "transparent
authentication".

On Tue, Sep 20, 2016 at 9:58 AM, FredB  wrote:
> I forgot, if possible a method without active directory
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] GoLang Based delayer

2016-10-25 Thread Kinkie
Hi Eliezer,
   Please list it as "related software" on the wiki.

On Tue, Oct 25, 2016 at 3:53 PM, Eliezer Croitoru  wrote:
> Inspired by Francesco Chemolli delayer at:
> http://bazaar.launchpad.net/~squid/squid/trunk/view/head:/src/acl/external/d
> elayer/ext_delayer_acl.pl.in
>
> I wrote a delayer in golang:
> http://wiki.squid-cache.org/EliezerCroitoru/GoLangDelayer
>
> The binaries for the helper are at:
> http://ngtech.co.il/squid/helpers/delayer/squid-externalacl_delayer.tar.xz
>
> For windows, linux, BSD, Darwin, arm
>
> Eliezer
>
> 
> Eliezer Croitoru 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Wiki outage - solved

2017-03-13 Thread Kinkie
Hi all,
   due to a hard drive failure on one of the servers we run, the wiki has
been unavailable for the past couple of days.
   The volunteers overseeing the project infrastructure have been able to
restore the service by moving it to different hardware, and the wiki
should now become progressively available again as DNS records propagate.

   Please join me in thanking the volunteers who help run the squid
infrastructure for donating their time and expertise.

   It is a good moment to remind everyone that the Squid project and the
Squid Software Foundation rely on everyone's effort and on generous
donations by individuals, companies and organizations to continue
supporting squid and accompanying services.

The list of main sponsors is at
http://www.squid-cache.org/Support/sponsors.html

Please refer to http://www.squid-cache.org/Foundation/donate.html if you
wish to donate financial or material resources to the Squid project.

-- 
Francesco Chemolli
Vice President, Squid Software Foundation
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] wiki.squid-cache.org SSL configuration problem ...

2017-08-20 Thread Kinkie
Hi,
  it was fixed last week. Thanks again for the heads-up!
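
For anyone who wants to double-check the served chain from a client, a stock
OpenSSL one-liner does it (not part of the original exchange; the
"Certificate chain" section of the output should now list the Let's Encrypt
Authority X3 intermediate):

  openssl s_client -connect wiki.squid-cache.org:443 -servername wiki.squid-cache.org -showcerts </dev/null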


On Tue, Aug 8, 2017 at 9:00 PM, Francesco Chemolli  wrote:
> On 8 Aug 2017, at 19:06, Walter H.  wrote:
>
> Hello,
>
> the intermediate certificate which is provided doesn't go with the end
> entity certificate ...
>
> the intermediate that is provided:  Let's Encrypt Authority X1
> the intermediate that should be provided:  Let's Encrypt Authority X3
>
> for more see:
> https://www.ssllabs.com/ssltest/analyze.html?d=wiki.squid-cache.org&s=104.130.201.120
>
>
>
>
> Thanks for letting us know.
> We'll look into it ASAP.
>
> Francesco



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ntlm_auth defaulting to succeed

2015-12-02 Thread Kinkie
Hi,
  you can check the ntlm_fake_auth helper; it'll blindly trust
anything the user says.
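
For reference, a minimal squid.conf fragment for the fake helper might look
like this (the helper path varies by distribution and build options, so treat
it as an assumption):

  auth_param ntlm program /usr/lib/squid/ntlm_fake_auth
  auth_param ntlm children 10
  acl authed proxy_auth REQUIRED
  http_access allow authed

The username is still captured and logged; it just is never verified.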

On Wed, Dec 2, 2015 at 10:10 PM, Noel Kelly  wrote:
> Hello All
>
> We have been using Squid and ntlm_auth for many years with mainly success.
> However we have always had a few annoyances like continual authentication
> pop-ups if a user has changed their password and not restarted their session
> or, as now, persistent popups which seem related to a browser update (Google
> Chrome is the suspect currently).
>
> It occurred to me that these days we don't use ntlm_auth to block Internet
> access per se but rather to capture the username to manage access using ACLs
> and the username.
>
> So I was wondering if anyone had any ideas for a Squid config where the
> ntlm_auth helper always succeeded regardless of the password, so the user
> gets waved through and Squid has the username needed to process the ACLs?
>
> Thanks
> Noel
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Disabling IP6 in 3.5.x

2015-12-02 Thread Kinkie
Hi Patrick,
   ./configure --disable-ipv6 

will do the trick.

On Thu, Dec 3, 2015 at 12:43 AM, Patrick Flaherty  wrote:
> Hello,
>
>
>
> Is there a way to disable IP6 in the 3.5.x Squid builds?
>
>
>
> Thanks
>
> Patrick
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Problems with ldap authentication

2015-12-08 Thread Kinkie
On Tue, Dec 8, 2015 at 6:14 PM, Marcio Demetrio Bacci
 wrote:
> Hi
>
> In the Squid Server, I want only basic authentication.
>
> The command:
>
> /usr/lib/squid3/basic_ldap_auth \
>-b cn=users,dc=empresa,dc=com,dc=br \
>-D cn=proxy,cn=users,dc=empresa,dc=com,dc=br -w test_12345 \
>-h 192.168.0.25 -p 389 -s sub -v 3 -f "sAMAccountName=%s"
>
> shows "Success" to authenticate only the users in Organization Unity  (OU)
> "Users", but in my domain I have many OU that has users as TI, Financial,
> Sales..
>
> How I get authenticate the users in others OU?

Since you are using "sub" as search scope, you simply have to move up
one level in the base-DN tree.
Change the parameter
-b cn=users,dc=empresa,dc=com,dc=br
to
-b dc=empresa,dc=com,dc=br
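
Putting it together, the adjusted test command becomes (same credentials as
above, only the base DN widened):

/usr/lib/squid3/basic_ldap_auth \
   -b dc=empresa,dc=com,dc=br \
   -D cn=proxy,cn=users,dc=empresa,dc=com,dc=br -w test_12345 \
   -h 192.168.0.25 -p 389 -s sub -v 3 -f "sAMAccountName=%s"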

   Francesco Chemolli
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.4, dstdomain

2015-12-10 Thread Kinkie
Hi,
  it works exactly as you expect. "dstdomain addons.mozilla.org" does
not block subdomains.
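
For clarity, the two forms side by side (ACL names are just examples):

  acl moz_host dstdomain addons.mozilla.org    # matches only that exact host
  acl moz_tree dstdomain .addons.mozilla.org   # matches the host and all its subdomains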

On Thu, Dec 10, 2015 at 11:02 AM,   wrote:
> 2015/12/10 10:33:49| ERROR: '.addons.mozilla.org' is a subdomain of
> 'addons.mozilla.org'
>
>
> I thought
> addons.mozilla.org  blocks only these hostname
>
> .addons.mozilla.org blocks all the sub-domains, like
> www.addons.mozilla.org etc.addons.mozilla.org
>
>
> Which are the parsing rules of squid 3.4 ?
>
> Does the first case block also the sub-domains ?
>
>
> best regards, Sala
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.4, dstdomain

2015-12-10 Thread Kinkie
On Thu, Dec 10, 2015 at 11:43 AM,   wrote:
> Massimo
>> 2015/12/10 10:33:49| ERROR: '.addons.mozilla.org' is a subdomain of
>> 'addons.mozilla.org'
>
>
> Kinkie :
>>  it works exactly as you expect. "dstdomain addons.mozilla.org" does
>> not block subdomains.
>
>
>
> So why doesn't squid accept both rules ? a parsing bug ?


No bug, it is really intentional: ".addons.mozilla.org" also matches
"addons.mozilla.org" (without the dot). Therefore the latter is
rejected to keep the internal data structures consistent.
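
So if the intent is to cover the whole tree, keeping only the dotted entry is
enough, e.g.:

  acl mozaddons dstdomain .addons.mozilla.org
  http_access deny mozaddons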


-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Slow App through Proxy

2015-12-18 Thread Kinkie
Hi,
  Do you see anything denied in the squid logs? From what you say it could
be related to a failing attempt to validate a certificate.
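
A quick way to look for such denials (assuming the default log location;
yours may differ):

  grep TCP_DENIED /var/log/squid/access.log | tail -n 20
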
On Dec 18, 2015 17:25, "Patrick Flaherty"  wrote:

> Hello,
>
>
>
> We have an app configured to use Squid Proxy (3.5.11). The client machine
> does not have access to the internet except for the whitelisted domains in
> Squid. The app launches painfully slow. It seems to be SSL Certificate
> related. I found a way to fix it but don’t know why it fixes it. Let me
> explain.
>
>
>
> If I go into IE and configure it to use the Squid Proxy and I go to our
> website (SSL Based), the page comes up fine with a nice lock symbol
> signifying SSL. I then turn off the proxy config in IE to stop using the
> Squid Proxy. I relaunch our app and it launches fast forever more!!! I
> thought that it might be downloading a certificate but I look at all the
> Windows certificates either through IE or CertMgr.msc and it appears that
> no new certificates are in there after this exercise. Something in the
> Windows config changed and I don’t know what it is. I would love to know
> because I would like to see if there is an easier method to fix this as
> opposed to the one I just outlined.
>
>
>
> Any input would be greatly appreciated.
>
>
>
> Patrick
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is not worked in OpenVZ VPS.

2015-12-30 Thread Kinkie
Well, the IPv6 address could be telling. Maybe OpenVZ is setting up a
V6 network but has no route out of it.
Can you try accessing a known V4 and a known V6 address? It could help
you understand if the issue is there. In that case, you need to fix
the issue at the OpenVZ level.
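
A few quick checks from inside the container usually tell the story (assuming
iproute2 and ping are available; the addresses are just well-known public
ones):

  ip -6 route show default          # is there any IPv6 default route at all?
  ping6 -c 3 2001:4860:4860::8888   # a known v6 address (Google public DNS)
  ping -c 3 8.8.8.8                 # a known v4 address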


On Wed, Dec 30, 2015 at 3:14 PM, Billy.Zheng  wrote:
> Thanks for you reply.
>
> The failed message is: `Connection to  failed',  is a IPV6
> address somehow.
>
> I found i just could't access part of website, not all.
>
> so, I thought this is not Squid problem, maybe china GFW prevent this,
> I doubt OpenVZ provider's machine room exist some problem.
>
> Thanks.
>
> Francesco Chemolli writes:
>
>>> On 30 Dec 2015, at 11:39, Billy.Zheng(zw963)  wrote:
>>>
>>> Hi, I have two VPS in same location(HONG KONG)
>>>
>>> the two VPS is blongs to two service provider, one OpenVZ, one XEN.
>>>
>>> I choice with same version CentOS(6.7), and with same config script for
>>> a FORWARD proxy to access free world.
>>>
>>> XEN always worked for me, but OpenVZ is not.
>>>
>>
>>> the second logs is so strange,  www.vpsnine.com is my OpenVZ VPS
>>> provider domain name, I never access it from my local browser.
>>> and not like another XEN VPS, those log output is very very slow.
>>>
>>> Could you give me some clue for resolve this? Thanks.
>>
>> when you try accessing some destination with the proxy that is not working, 
>> what does the error page say?
>>
>>   Kinkie
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>
> --
> Geek, Rubyist, Emacser
> Homepage: http://zw963.github.io
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-4.0.4 beta is available

2016-01-10 Thread Kinkie
Hi eliezer,
   This looks like a broken or not completely installed libstdc++.
Could you check that all packages mentioned at
http://wiki.squid-cache.org/BuildFarm/CentosInstall are installed on
your build system?

On Sun, Jan 10, 2016 at 6:02 PM, Eliezer Croitoru  wrote:
> I am having trouble building 4.0.4 on OpenSUSE leap.
> I have tried both manually and using the rpm build tools.
> The error in the rpmbuild logs at:
> http://ngtech.co.il/repo/opensuse/leap/logs/build5-4.0.4.log
> and the build log of the manual compilation are at:
> http://ngtech.co.il/repo/opensuse/leap/logs/conf1-4.0.4.log
> http://ngtech.co.il/repo/opensuse/leap/logs/build1-4.0.4.log
>
> The error output:
> make[3]: Entering directory
> '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers/basic_auth/NCSA'
> depbase=`echo basic_ncsa_auth.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
> /usr/local/bin/g++ -DHAVE_CONFIG_H   -I../../.. -I../../../include
> -I../../../lib -I../../../src -I../../../include-I.  -Wall
> -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror
> -Wno-deprecated-register -pipe -D_REENTRANT -g -O2 -march=native -std=c++11
> -MT basic_ncsa_auth.o -MD -MP -MF $depbase.Tpo -c -o basic_ncsa_auth.o
> basic_ncsa_auth.cc &&\
> mv -f $depbase.Tpo $depbase.Po
> basic_ncsa_auth.cc: In function ‘int main(int, char**)’:
> basic_ncsa_auth.cc:104:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("");
>  ^
> basic_ncsa_auth.cc:104:42: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("");
>   ^
> basic_ncsa_auth.cc:108:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("");
>  ^
> basic_ncsa_auth.cc:108:42: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("");
>   ^
> basic_ncsa_auth.cc:115:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("No such user");
>  ^
> basic_ncsa_auth.cc:115:54: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("No such user");
>   ^
> basic_ncsa_auth.cc:128:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:128:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:133:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:133:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:138:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("Password too long. Only 8 characters accepted.");
>  ^
> basic_ncsa_auth.cc:138:88: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("Password too long. Only 8 characters accepted.");
>
>  ^
> basic_ncsa_auth.cc:144:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:144:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:148:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:148:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:151:9: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("Wrong password");
>  ^
> basic_ncsa_auth.cc:151:52: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("Wrong password");
> ^
> At global scope:
> cc1plus: error: unrecognized command line option "-Wno-deprecated-register"
> [-Werror]
> cc1plus: all warnings being treated as errors
> Makefile:814: recipe for target 'basic_ncsa_auth.o' failed
> make[3]: *** [basic_ncsa_auth.o] Error 1
> make[3]: Leaving directory
> '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers/basic_auth/NCSA'
> Makefile:517: recipe for target 'all-recursive' failed
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory
> '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers/basic_auth'
> Makefile:517: recipe for target 'all-recursive' failed
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers'
> Makefile:569: recipe for target 'all-recursive' failed
> make: *** [all-recursive] Error 1
> ##END OF OUTPUT
>
> I have tried to understand the issue and I found out that it might be
> because of the usage of gcc and not g++ and I have tried to use CXX=g++ in
> order to test the issue but it doesn't help.
> On the same machine I have built 3.5.13 without any issues.
>
> If I can add more information on the build node just let me know.
>
> Thanks,
> Eliezer
>
> On 10/01/2016 08:15, Amos Jeffries wrote:
>>
>> The Squid HTT

Re: [squid-users] Squid-4.0.4 on FreeBSD

2016-01-13 Thread Kinkie
Hi,
   I see that there is no -I/usr/local/include option to the compiler.

Add that as CPPFLAGS when calling configure
(e.g.
CPPFLAGS=-I/usr/local/include ./configure
)
this should fix the build for you.


On Wed, Jan 13, 2016 at 4:25 PM, Odhiambo Washington  wrote:
> I am trying to compile on FreeBSD 10.1-RELEASE-amd64
>
>
> 
> /bin/sh ../libtool  --tag=CC   --mode=compile clang -DHAVE_CONFIG_H   -I..
> -I../include -I../lib -I../src -I../include  -I/usr/include  -I/usr/include
> -I../libltdl -I/usr/include -I/usr/local/include/libxml2  -Werror
> -Qunused-arguments  -D_REENTRANT  -MT md5.lo -MD -MP -MF $depbase.Tpo -c -o
> md5.lo md5.c &&\
> mv -f $depbase.Tpo $depbase.Plo
> libtool: compile:  clang -DHAVE_CONFIG_H -I.. -I../include -I../lib -I../src
> -I../include -I/usr/include -I/usr/include -I../libltdl -I/usr/include
> -I/usr/local/include/libxml2 -Werror -Qunused-arguments -D_REENTRANT -MT
> md5.lo -MD -MP -MF .deps/md5.Tpo -c md5.c  -fPIC -DPIC -o .libs/md5.o
> In file included from md5.c:41:
> ../include/md5.h:13:10: fatal error: 'nettle/md5.h' file not found
> #include 
>  ^
> 1 error generated.
> Makefile:956: recipe for target 'md5.lo' failed
> gmake[2]: *** [md5.lo] Error 1
> gmake[2]: Leaving directory '/usr/home/wash/ILI/Squid/4.x/squid-4.0.4/lib'
> Makefile:1001: recipe for target 'all-recursive' failed
> gmake[1]: *** [all-recursive] Error 1
> gmake[1]: Leaving directory '/usr/home/wash/ILI/Squid/4.x/squid-4.0.4/lib'
> Makefile:579: recipe for target 'all-recursive' failed
> gmake: *** [all-recursive] Error 1
>
> 
>
>
>
> But the file is there ...
>
>
> wash@mail:~/ILI/Squid/4.x/squid-4.0.4$ ls -al
> /usr/local/include/nettle/md5.h
> -rw-r--r--  1 root  wheel  2023 Jan  7  2015 /usr/local/include/nettle/md5.h
>
>
> --
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft."
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Compiling Squid for Android ARM v7 and higher

2016-01-29 Thread Kinkie
On Fri, Jan 29, 2016 at 10:08 AM, Hans Dampf  wrote:
> Hello,
>
> I was trying to compile latest squid source for my android smartphone. I
> tried almost everything but I cant successfully compile it.
> Can you please help me with a tutorial for dummies  or even send me a
> precompiled binary ?
>
> Thanks in advance


Hi,
   there is nothing that we Squid developers know of preventing Squid from
building on ARM. Android is a different story: we have not ported to that
platform and we do not know how a build for it would be done.
  If you can share a bit more about what is blocking you, maybe some
other user can help.


-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid None Aborted problem

2016-02-08 Thread Kinkie
Hi,
  I can't find any reference to this problem in earlier mails; I
must have missed it.
Can you share more context?

On Mon, Feb 8, 2016 at 6:18 PM, secoonder  wrote:
> please help me
> i dont want to return 12.04 :(
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-None-Aborted-problem-tp4675901p4675913.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Crashing

2016-02-09 Thread Kinkie
Hi,
  it's all in the logs you posted:

ipcCreate: fork: (12) Cannot allocate memory
WARNING: Cannot run '/lib/squid3/ssl_crtd' process.
...
FATAL: Failed to create unlinkd subprocess

You've run out of system memory during startup.
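
A couple of generic Linux checks help confirm this (not from the thread): the
fork() of a large Squid process can be refused even when "free" memory looks
plentiful, because the kernel must be willing to account for a full copy of
the parent's address space.

  free -m                              # overall memory and swap
  cat /proc/sys/vm/overcommit_memory   # 2 = strict accounting, most likely to refuse forks
  grep -i commit /proc/meminfo         # CommitLimit vs Committed_AS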


On Tue, Feb 9, 2016 at 4:47 PM, Panda Admin  wrote:
> Hello,
>
> I am running squid 3.5.13 and it crashes with these errors:
>
> 2016/02/09 15:43:24 kid1| Set Current Directory to /var/spool/squid3
> 2016/02/09 15:43:24 kid1| Starting Squid Cache version 3.5.13 for
> x86_64-pc-linux-gnu...
> 2016/02/09 15:43:24 kid1| Service Name: squid
> 2016/02/09 15:43:24 kid1| Process ID 7279
> 2016/02/09 15:43:24 kid1| Process Roles: worker
> 2016/02/09 15:43:24 kid1| With 1024 file descriptors available
> 2016/02/09 15:43:24 kid1| Initializing IP Cache...
> 2016/02/09 15:43:24 kid1| DNS Socket created at [::], FD 6
> 2016/02/09 15:43:24 kid1| DNS Socket created at 0.0.0.0, FD 7
> 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.78 from /etc/resolv.conf
> 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.79 from /etc/resolv.conf
> 2016/02/09 15:43:24 kid1| Adding domain nuspire.com from /etc/resolv.conf
> 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 5/10 'ssl_crtd'
> processes
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 0/15 'squidGuard'
> processes
> 2016/02/09 15:43:24 kid1| helperOpenServers: No 'squidGuard' processes
> needed.
> 2016/02/09 15:43:24 kid1| Logfile: opening log syslog:local5.info
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> FATAL: Failed to create unlinkd subprocess
> Squid Cache (Version 3.5.13): Terminated abnormally.
> CPU Usage: 20.041 seconds = 19.115 user + 0.926 sys
> Maximum Resident Size: 4019840 KB
> Page faults with physical i/o: 0
>
>
> Anybody have an idea why?
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Crashing

2016-02-09 Thread Kinkie
If you are swapping, performance will suffer terribly. How large are these
files and how much RAM do you have?
On Feb 9, 2016 5:17 PM, "Panda Admin"  wrote:

> Adding a swap directory fixed it for now.  I think it's because my ACL
> files are so large.
>
> On Tue, Feb 9, 2016 at 11:00 AM, Panda Admin 
> wrote:
>
>> I see that, but that's not possible. I still have system memory available.
>> I just did a top while running squid, never went over 30% memory usage.
>> It maxed out the CPU but not the memory. So, yeah...still confused.
>>
>> On Tue, Feb 9, 2016 at 10:55 AM, Kinkie  wrote:
>>
>>> Hi,
>>>   it's all in the logs you posted:
>>>
>>> ipcCreate: fork: (12) Cannot allocate memory
>>> WARNING: Cannot run '/lib/squid3/ssl_crtd' process.
>>> ...
>>> FATAL: Failed to create unlinkd subprocess
>>>
>>> You've run of system memory during startup.
>>>
>>>
>>> On Tue, Feb 9, 2016 at 4:47 PM, Panda Admin 
>>> wrote:
>>> > Hello,
>>> >
>>> > I am running squid 3.5.13 and it crashes with these errors:
>>> >
>>> > 2016/02/09 15:43:24 kid1| Set Current Directory to /var/spool/squid3
>>> > 2016/02/09 15:43:24 kid1| Starting Squid Cache version 3.5.13 for
>>> > x86_64-pc-linux-gnu...
>>> > 2016/02/09 15:43:24 kid1| Service Name: squid
>>> > 2016/02/09 15:43:24 kid1| Process ID 7279
>>> > 2016/02/09 15:43:24 kid1| Process Roles: worker
>>> > 2016/02/09 15:43:24 kid1| With 1024 file descriptors available
>>> > 2016/02/09 15:43:24 kid1| Initializing IP Cache...
>>> > 2016/02/09 15:43:24 kid1| DNS Socket created at [::], FD 6
>>> > 2016/02/09 15:43:24 kid1| DNS Socket created at 0.0.0.0, FD 7
>>> > 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.78 from
>>> /etc/resolv.conf
>>> > 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.79 from
>>> /etc/resolv.conf
>>> > 2016/02/09 15:43:24 kid1| Adding domain nuspire.com from
>>> /etc/resolv.conf
>>> > 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 5/10 'ssl_crtd'
>>> > processes
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 0/15 'squidGuard'
>>> > processes
>>> > 2016/02/09 15:43:24 kid1| helperOpenServers: No 'squidGuard' processes
>>> > needed.
>>> > 2016/02/09 15:43:24 kid1| Logfile: opening log syslog:local5.info
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > FATAL: Failed to create unlinkd subprocess
>>> > Squid Cache (Version 3.5.13): Terminated abnormally.
>>> > CPU Usage: 20.041 seconds = 19.115 user + 0.926 sys
>>> > Maximum Resident Size: 4019840 KB
>>> > Page faults with physical i/o: 0
>>> >
>>> >
>>> > Anybody have an idea why?
>>> >
>>> > ___
>>> > squid-users mailing list
>>> > squid-users@lists.squid-cache.org
>>> > http://lists.squid-cache.org/listinfo/squid-users
>>> >
>>>
>>>
>>>
>>> --
>>> Francesco
>>>
>>
>>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidGuard: redirect to squid-internal URLs no longer working with 3.5?

2016-03-14 Thread Kinkie
Hi,
  .. has it ever? internal:// doesn't seem like a recognized protocol to me.

On Mon, Mar 14, 2016 at 10:29 AM, Silamael  wrote:
> Hi there,
>
> I'm updating from 3.4. to 3.5 and noticed that the following
> redirect-URL from squidGuard no longer works:
> internal://squid-internal-static/error-access-denied
> As far as I can see, Squid now parses the rewrite answers through a
> standard URL parser, which results in the port being 0.
> But even by specifying an explicit port I'm not able to redirect to a
> squid internal URL.
> Am I missing here something or is redirecting to the internal error
> pages intentionally no longer supported by Squid?
>
> Greetings,
> Matthias
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] We have a big problems with Squid 3.3.8, it's a bug ?

2016-03-30 Thread Kinkie
Are you using Basic, NTLM or Kerberos?
Do you know that user's password, in order to run some tests?
Do you have some other proxy or box where you can run some tests?
AD is a complex system, so the first thing to do is to understand whether
the problem is caused by AD, by the system, by something related to the
user, by the auth helper, or by Squid.
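
One way to take Squid out of the picture is to drive the helper by hand; with
Samba's ntlm_auth in basic mode that looks roughly like this (assumes a
winbind setup and that you know the user's password, shown as a placeholder):

  echo "olivier the_password" | ntlm_auth --helper-protocol=squid-2.5-basic
  # expected reply: OK (or ERR if the credentials are rejected)
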
On Mar 30, 2016 9:50 AM, "Olivier CALVANO"  wrote:

> Anyone know this problems ?
>
>
> 2016-03-29 18:22 GMT+02:00 Olivier CALVANO :
>
>> Hi
>>
>> we use on a new server Squid 3.3.8 on CentOS 7 with a Active Directory
>> Authentification (tested in negotiate_wrapper but same
>> problems with ntlm_auth) .
>>
>> That's work's very good a time but without reason, a limited user can't
>> access to internet and i don't know why.
>>
>> In the logs, we have:
>>
>> 1459266547.967 1200888 172.16.6.39 NONE_ABORTED/000 0 GET
>> http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?
>> olivier HIER_NONE/- -
>> 1459266567.771 3538111 172.16.6.14 NONE_ABORTED/000 0 GET
>> http://yahoo.fr/ olivier HIER_NONE/- -
>> 1459267856.877  30609 172.16.6.39 NONE_ABORTED/000 0 GET
>> http://officecdn.microsoft.com/Office/Data/v32.cab olivier HIER_NONE/- -
>> 1459267917.860  60713 172.16.6.39 NONE_ABORTED/000 0 HEAD
>> http://officecdn.microsoft.com/Office/Data/v32.cab olivier HIER_NONE/- -
>>
>>
>> I don't know why but all logs have "NONE_ABORTED/000"
>> anyone know this errors ?
>>
>>
>> If, on the same PC, i change the username, that's work ! reconnect with
>> the old username and the problems start
>>
>> regards
>> Olivier
>>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5.1 install

2015-01-18 Thread Kinkie
Hi Matt,
  could you share the last 20 lines of the build output, and what
environment you are building on?
The developers regularly test builds on 15 or so different OSes (see
http://build.squid-cache.org) but yours may be different.
You may also want to check the squid wiki for details on how to set up
a build environment.
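
Something along these lines captures the whole build and shows the tail
(plain shell, nothing Squid-specific):

  make 2>&1 | tee build.log
  tail -n 20 build.log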

On Sun, Jan 18, 2015 at 7:51 PM, Matt Bowman  wrote:
> Hey guys,
>
> I just tried compiling the latest version of squid 3.5.1 with OpenSSL enabled 
> and am receiving compile errors.  Has anyone else run into this problem?
>
> Thanks,
>
> Matt
>
> Sent from my iPhone
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Fwd: Squid 3.5.2 Compile Error

2015-03-07 Thread Kinkie
Could you please attach your config.log file?

Thanks

On Sat, Mar 7, 2015 at 5:27 AM, Michel Peterson
 wrote:
> Hi friends,
>
> I'm trying to compile squid 3.5.2 on debian wheezy and I am getting
> the following error after running the command "make all":
>
> Making all in compat
> make[1]: Entrando no diretório `/root/squid-3.5.2/compat'
> depbase=`echo assert.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
> /bin/bash ../libtool  --tag=CXX   --mode=compile g++
> -DHAVE_CONFIG_H   -I.. -I../include -I../lib -I../src -I../include
> -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror
> -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64  -g
> -O2 -march=native -std=c++11 -MT assert.lo -MD -MP -MF $depbase.Tpo -c
> -o assert.lo assert.cc &&\
> mv -f $depbase.Tpo $depbase.Plo
> libtool: compile:  g++ -DHAVE_CONFIG_H -I.. -I../include -I../lib
> -I../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments
> -Wshadow -Werror -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE
> -D_FILE_OFFSET_BITS=64 -g -O2 -march=native -std=c++11 -MT assert.lo
> -MD -MP -MF .deps/assert.Tpo -c assert.cc  -fPIC -DPIC -o
> .libs/assert.o
> In file included from ../include/squid.h:43:0,
>  from assert.cc:9:
> ../compat/compat.h:49:57: error: operator '&&' has no right operand
> make[1]: ** [assert.lo] Erro 1
> make[1]: Saindo do diretório `/root/squid-3.5.2/compat'
> make: ** [all-recursive] Erro 1
>
>
>
> Help me plz.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Fwd: Squid 3.5.2 Compile Error

2015-03-07 Thread Kinkie
Yes, and I thought that as well. My reaction would be to make sure
that the second variable is guaranteed to be defined as 0.

On Sun, Mar 8, 2015 at 6:26 AM, Amos Jeffries  wrote:
> On 8/03/2015 7:33 a.m., Kinkie wrote:
>> Could you please attach your config.log file?
>
> FYI: This appears to be the precompiler not treating undefined macros as
> 0/false. Which is kind of weird in Wheezy since that compiler version
> was in use during the test development.
>
> Amos
>
>>
>> Thanks
>>
>> On Sat, Mar 7, 2015 at 5:27 AM, Michel Peterson wrote:
>>> Hi friends,
>>>
>>> I'm trying to compile squid 3.5.2 on debian wheezy and I am getting
>>> the following error after running the command "make all":
>>>
>>> Making all in compat
>>> make[1]: Entrando no diretório `/root/squid-3.5.2/compat'
>>> depbase=`echo assert.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
>>> /bin/bash ../libtool  --tag=CXX   --mode=compile g++
>>> -DHAVE_CONFIG_H   -I.. -I../include -I../lib -I../src -I../include
>>> -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror
>>> -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64  -g
>>> -O2 -march=native -std=c++11 -MT assert.lo -MD -MP -MF $depbase.Tpo -c
>>> -o assert.lo assert.cc &&\
>>> mv -f $depbase.Tpo $depbase.Plo
>>> libtool: compile:  g++ -DHAVE_CONFIG_H -I.. -I../include -I../lib
>>> -I../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments
>>> -Wshadow -Werror -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE
>>> -D_FILE_OFFSET_BITS=64 -g -O2 -march=native -std=c++11 -MT assert.lo
>>> -MD -MP -MF .deps/assert.Tpo -c assert.cc  -fPIC -DPIC -o
>>> .libs/assert.o
>>> In file included from ../include/squid.h:43:0,
>>>  from assert.cc:9:
>>> ../compat/compat.h:49:57: error: operator '&&' has no right operand
>>> make[1]: ** [assert.lo] Erro 1
>>> make[1]: Saindo do diretório `/root/squid-3.5.2/compat'
>>> make: ** [all-recursive] Erro 1
>>>
>>>
>>>
>>> Help me plz.
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users
>>
>>
>>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange message when doing a squid -k parse or reconfigure

2015-04-07 Thread Kinkie
Hi,
  I've searched for these strings in squid, couldn't find them.
Maybe this is emitted by some library?

On Tue, Apr 7, 2015 at 5:16 PM, dweimer  wrote:
> My Squid Process seems to be working fine, but I noticed an unusual message
> when testing the squid configuration
>
> squid: environment corrupt; missing value for https_pr
>
> Any Ideas? Its a forward only proxy not doing reverse proxy or anything. Its
> running on FreeBSD 10.1-RELEASE-p8, installed from ports Squid version is
> 3.4.12. I don't have any problems accessing http or https sites through it.
>
> --
> Thanks,
>Dean E. Weimer
>http://www.dweimer.net/
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Client delay pools ...doesn't work

2015-04-12 Thread Kinkie
Hi Fiorenza,
  does your browser display the same error when you remove that config
line and reconfigure squid?

On Fri, Apr 10, 2015 at 3:51 PM, Fiorenza Meini  wrote:
> Hi,
> I'm testing on a 3.4 squid release the client_delay_poolfunctionality.
> It seems that isn't working: on my browser I receive the error that proxy
> isn't reachable, and in log file I can't see nothing useful.
>
> Has anyone configured this functionality successfully ?
>
> Regards
>
> Fiorenza Meini
> --
> Spazio Web S.r.l.
> V. Dante, 10
> 13900 Biella
> Tel.: +39 015 2431982
> Fax.: +39 015 2522600
> Numero d'Iscrizione al Registro Imprese presso CCIAA Biella, Cod.Fisc.e
> P.Iva: 02414430021
> Iscriz. REA: BI - 188936 Cap. Soc.: EURO. 30.000 i.v.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to achieve squid to handle 2000 concurrent connections?

2015-04-19 Thread Kinkie
What does cachemgr say? In particular, what are the contents of the
"general information" and the "filedescriptor allocation" pages?

On Sun, Apr 19, 2015 at 11:58 AM, Abdelouahed Haitoute
 wrote:
> Hello,
>
> I've got the following setup, each application on its own virtual machine:
>
> Client (sends http-requests to proxy)--> Squid (sends http-requests to apache
> based on destination IP and round robin to multiple apache machines) -->
> Apache (setting up a two way ssl to the requested server) --> HTTPS-server
>
> This setup works great, and I have the Apache and the HTTPS-server its
> performance tuned. Both can handle 2000 concurrent connections of file sizes
> up to 10MB.
>
> Unfortunately I haven't been successful with the Squid-server. After a while
> I'm getting the following error messages in the log:
> 1429432828.200  62854 10.10.7.16 TCP_MISS_ABORTED/000 0 GET
> http://https.example.com/index.html - ROUNDROBIN_PARENT/192.168.0.20 -
>
> The Squid virtual machine contains the following:
> CentOS 7.1 with latest updates
> Squid Cache: Version 3.3.8
> CPU: Intel Xeon E312xx (Sandy Bridge) - 1799.998 MHz (4 cores)
> Memory: 4096 MiB
> Harddisk: 10 GiB, SCSI, raw, cache none
>
> When I execute a performance test with 2000 concurrent connections handling
> a file size of 10KB on each request.
> # ab -n 1 -c 2000 -X 10.10.7.15:3128 http://https.example.com/index.html
> This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> Licensed to The Apache Software Foundation, http://www.apache.org/
>
> Benchmarking https.rinis.nl [through 10.10.7.15:3128] (be patient)
> Completed 1000 requests
> Completed 2000 requests
> Completed 3000 requests
> Completed 4000 requests
> Completed 5000 requests
> Completed 6000 requests
> Completed 7000 requests
> Completed 8000 requests
> apr_pollset_poll: The timeout specified has expired (70007)
> Total of 8610 requests completed
>
> I have the command "vmstat 5" running on the squid server:
> procs ---memory-- ---swap-- -io -system--
> --cpu-
>  r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy id
> wa st
>  2  0  0 3823916764 12499200   51926  237  503  2  3 92
> 3  0
>  0  0  0 3823744764 12507200 0 0   44   79  0  0 100
> 0  0
>  0  0  0 3823776764 12504400 0 2   39   70  0  0 100
> 0  0
>  0  0  0 3729540764 13911600 1 0 2145  257  1  2 97
> 0  0
>  0  0  0 3728432764 13988800 046 2297  594  1  1 97
> 0  0
>  0  0  0 3726484764 14089200 039 2869  581  2  1 97
> 0  0
>  0  0  0 3725528764 14137600 0 0 2843  648  2  2 96
> 0  0
>  0  0  0 3724980764 14200800 069 2824  529  2  1 97
> 0  0
>  0  0  0 3724584764 14254000 0 0 2742  472  2  1 97
> 0  0
>  0  0  0 3723696764 14300400 0 0 2511  577  2  1 97
> 0  0
>  0  0  0 3722840764 14320000 012  884  228  1  1 99
> 0  0
>  0  0  0 3722704764 14290000 0 0  136  127  0  0 100
> 0  0
>  0  0  0 3722504764 14274400 0 0   40   70  0  0 100
> 0  0
>  0  0  0 3722456764 14278400 0   114   37   68  0  0 100
> 0  0
>  0  0  0 3722208764 14283200 0 0   41   68  0  0 100
> 0  0
>  0  0  0 3722480764 14228000 0 0  179   82  0  0 100
> 0  0
>  0  0  0 3722544764 14214000 0 7   41   75  0  0 100
> 0  0
> procs ---memory-- ---swap-- -io -system--
> --cpu-
>  r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy id
> wa st
>  1  0  0 3722544764 14213600 0 0   36   67  0  0 100
> 0  0
>  0  0  0 3722996764 14155200 0 0   42   75  0  0 100
> 0  0
>  0  0  0 3722980764 14156800 0 0   37   68  0  0 100
> 0  0
>  0  0  0 3723028764 14152400 0 0   36   66  0  0 100
> 0  0
>  0  0  0 3736816764 13035200 0 0  809  114  0  0 99
> 0  0
>  0  0  0 3737544764 13026800 041   42   74  0  0 100
> 0  0
>
> It looks like the hardware has enough resources during the benchmark test.
>
> I've got the following squid.conf running:
> cache_peer 192.168.0.18 parent 3128 0 round-robin no-query no-digest
> cache_peer 192.168.0.20 parent 3128 0 round-robin no-query no-digest
>
> acl development_net dst 192.168.0.0/24
> cache_peer_access 192.168.0.18 allow development_net
> cache_peer_access 192.168.0.20 allow development_net
>
> never_direct allow all
> cache deny all
>
> maximum_object_size_in_memory 16 MB
> cache_mem 2048 MB
>
> The squid must not cache at all.
>
> Any help is welcome.
>
> Abdelouahed
>
> ___

Re: [squid-users] Squid Bugzilla is down

2015-04-30 Thread Kinkie
Hi,
  sorry, we had a severe OOM on the main squid server.
Now rebooted and hopefully better plugged. We will see about upgrading
the server as soon as possible.

On Thu, Apr 30, 2015 at 12:10 PM, Yuri Voinov  wrote:
> Amos,
>
> what's up with bugzilla? It down and not available.
>
> WBR, Yuri
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Bugzilla is down

2015-04-30 Thread Kinkie
Should be fine now.
Thanks for notifying of the issue.

On Thu, Apr 30, 2015 at 7:42 PM, Yuri Voinov  wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Now server produces 500 error.
>
> On 30.04.15 23:39, Kinkie wrote:
>> Hi,
>>   sorry, we had a severe OOM on the main squid server.
>> Now rebooted and hopefully better plugged. We will see about upgrading
>> the server as soon as possible.
>>
>> On Thu, Apr 30, 2015 at 12:10 PM, Yuri Voinov  wrote:
>>> Amos,
>>>
>>> what's up with bugzilla? It down and not available.
>>>
>>> WBR, Yuri
>>>
>>>
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users
>>
>>
>>
>
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJVQml8AAoJENNXIZxhPexGfvEIAKHXVkgDYuOob2YgmFB0AP1h
> h3jgjoNkGbxRkV+BCjAYpn/qSDHGHMI54T6d9r0If3oFrDLccWM3Bq+eGQK1smTj
> ZbRcvt37QtjYcuRMXqU42m/mQDZ5UvEOireGwn9DR9TKsbHHn0EKynDdsFaLK3A/
> 8AbSoRIxMLH9vPbhBGd0O5gFsgBit68v/8nt3P+GMbHhS/WIamG0FvlAQDqEnIir
> K2avn4C/PL4ZcKErKtCPMRYAl9KyO9HdAhXMKKAq3k0iKCknMd+NTKUtXBmDyH5Z
> F+bhdddG81OioGJ1LwMX8xIM4CT6JHEyO+dMa1n5/eydiWg6Fi0qaUYvFZytnLQ=
> =iwk9
> -END PGP SIGNATURE-
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can files be placed on a RAID volume now?

2015-06-07 Thread Kinkie
Yes, it still applies; it is a FAQ.
See http://wiki.squid-cache.org/SquidFaq/RAID


On Sun, Jun 7, 2015 at 12:35 PM, TarotApprentice
 wrote:
> I recall from Squid 2.7 days the recommendation not to put the cache files on 
> a RAID volume under Windows. Does that restriction still apply?
>
> Related does the windows version use the different file system types (ie 
> rock, aufs, ufs) for the disk cache or is it irrelevant under windows.
>
> Cheers,
> MarkJ
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] mimeInit: /etc/squid/mime.conf: (13) Permission denied

2015-06-12 Thread Kinkie
Hi all,
  file system corruption at times manifests itself as permission problems.
Can you fsck?

On Fri, Jun 12, 2015 at 12:00 PM, yashvinder hooda
 wrote:
> Hi Amos
>
> Squid pkg version is 3.5.2 and ‎it's running on openwrt.
> In logs I can see  one more permission related error ‎ and that is:
> ParseEtcHosts: /etc/hosts ()Permission denied
>
> Regards,
> Yashvinder
>   Original Message
> From: Amos Jeffries
> Sent: Friday 12 June 2015 3:18 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] mimeInit: /etc/squid/mime.conf: (13) Permission 
> denied
>
> On 12/06/2015 9:35 p.m., yashvinder hooda wrote:
>> ‎Hi,
>> Fred
>>
>> Squid directory permission is 644 with nobody:root and same is for mime.conf 
>> and squid.conf
>>
>> Regards,
>> Yashvinder
>
> Okay. Weird. It's not even like Squid is trying to open for write or
> anything fancy. It's just reading.
>
> Are you using the latest available Squid version?
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] I was wondering if someone has ever tried to use a SAN\NAS as the cache backend?

2015-06-25 Thread Kinkie
Hi Eliezer,
  it depends.
The problem is not the NAS/SAN per se, but the disk access patterns.
Squid's disk access pattern, regardless of the store technology, is always
randomly-timed 4 KB writes (with Rock they are sequential, with *ufs
scattered).
If the NAS/SAN uses a write-back policy, it is possible that by the
time it decides to flush to disk, squid has written to a full stripe
and everyone will be happy (except for RAID5 or 10); this is
relatively likely in case of Rock, unlikely in case of *ufs.
But every time a write is not stripe-aligned, the NAS/SAN will have to
read and write N stripes (N >= 2 depending on the type of RAID). This
is a bit suboptimal for the NAS/SAN in case of Rock, but it will
likely hurt the SAN/NAS performance in case of *ufs.

In case the SAN/NAS policy is not write-back but write-through, any
option (including rock) will adversely affect the SAN/NAS performance.
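
To put a rough number on the small-write penalty (textbook RAID-5 behaviour,
not a measurement of any particular array): a 4 KB write that does not cover
a full stripe costs read-old-data + read-old-parity + write-new-data +
write-new-parity, i.e. four disk I/Os for one logical write. A *ufs workload
of scattered 4 KB writes therefore keeps the array doing roughly four times
the I/O the proxy thinks it is issuing, whereas Rock's sequential slots at
least give a write-back cache a chance to coalesce them into full-stripe
writes.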

On Thu, Jun 25, 2015 at 2:09 PM, Eliezer Croitoru  wrote:
> Hello list,
>
> I was wondering if someone has ever tried to use a SAN\NAS as the cache
> backend?
> Since rock cache type\dir changed the file handling way from "lots of files
> db" into a single(and one more) cache db There is surly a way to benefit
> from nas and SAN.
>
> If someone have used san(ISCSI) or nas(NFS) for any of the cahed dirs type I
> would like to run some tests and you can help me not repeat old tests
> resolts.
>
> Thanks,
> Eliezer
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] a problem about reverse proxy and $.ajax

2015-07-17 Thread Kinkie
Hi,
  What is in the squid access.log for that request?

On Thu, Jul 16, 2015 at 5:27 PM, johnzeng  wrote:

>
> Hello dear All :
>
> i am writing testing download rate program recently ,
>
> and i hope use reverse proxy ( squid 3.5.x) too ,
>
> but if we use reverse proxy and i found Ajax won't succeed to download
>
> , and success: function(html,textStatus) -- return value ( html ) is blank
> .
>
>
> if possible , please give me some advisement .
>
>
>
> squid config
>
> http_port 4432 accel vport defaultsite=10.10.130.91
> cache_peer 127.0.0.1 parent 80 0 default name=ubuntu-lmr
>
>
>
> Ajax config
>
> $.ajax({
> type: "GET",
> url: load_urlv,
> cache: false,
> mimeType: 'text/plain; charset=x-user-defined',
>
> beforeSend: function(){
> $('#time0').html('download file...').show();
> },
>
> error: function(){
> alert('Error loading XML document');
> },
> success: function(html,textStatus)
> {
>
> ...
>
> }
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] forward proxy - many users with one login/passwd.

2015-07-31 Thread Kinkie
On Thu, Jul 30, 2015 at 11:57 PM, Berkes, David 
wrote:

>
> Just a basic question.  I have a 3.5.0.4 forward proxy setup with basic
> authentication for my MDM proxy (iphones).  All iphones are set with the
> global proxy and identical user-name/password.  They will be on an LTE
> network and will be switching IP's often.  The forward proxy
> user-name/password will always be the same from each iphone. I have read
> several things about (max_user_ip, authenticate_ip_ttl) and concerned with
> the setup.  I essentially don’t want to limit any number of source
> connections using the same credentials.   Please advise of any pitfalls
> and/or settings for many users switching IP's frequent, using the same
> login/passwd.
>
>
Hi,
  there's none that I can think of.
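
In other words, the cap only exists if you configure it. The relevant knobs,
shown commented out purely so you know what to leave alone (values are
illustrative):

  # acl oneip max_user_ip -s 2      # would limit each login to 2 concurrent IPs
  # authenticate_ip_ttl 60 seconds  # only meaningful together with max_user_ip

With neither of those in squid.conf, any number of clients can share the same
credentials from changing IPs.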

-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Mac OS X Updates

2015-08-24 Thread Kinkie
Hi John,
  according to the article you link to, it's not possible to cache these
updates: Apple has made a conscious effort to ensure that.

  Updates for older versions of MacOS may be over HTTP; newer ones are over
HTTPS on port 443 and on dynamically-generated ports. HTTP could be
cached, HTTPS cannot without ssl-bump/peek-and-splice (SSL man-in-the-middle).
  The wording of the article suggests that the list of trusted certificate
issuers for the HTTPS service is not the system's CA root certificate store
but is probably pinned to Apple's own. This means that SSL MITM is also not
possible, by design.


On Wed, Aug 19, 2015 at 9:20 PM, John Pearson 
wrote:

> Anyone have Mac OS X update caching working ? Without doing a SSL bump. I
> think they are hosted through https (
> https://support.apple.com/en-us/HT202943 )
>
> Thanks!
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>


-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 16G Virtual Mem

2015-08-28 Thread Kinkie
Hi,
   yes, it could be, depending on your configuration.
Please see http://wiki.squid-cache.org/SquidFaq/SquidMemory


On Fri, Aug 28, 2015 at 4:32 PM, Jorgeley Junior  wrote:

> Guys, is this really normal???
>
> ​
>
> --
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>


-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] High-Availability in Squid

2015-08-29 Thread Kinkie
Hi,
  please see http://wiki.squid-cache.org/Features/MonitorUrl.
It's available in Squid 2.6, and is one of the last few features that
haven't yet made it to Squid 3.x. If anyone is interested, code and
sponsorships are always welcome :)
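
For the 2.6/2.7 syntax, a sketch along the lines of that feature page (option
names are taken from the page; hostnames and the health URL are placeholders):

  cache_peer backend1.example.com parent 80 0 no-query originserver monitorurl=http://backend1.example.com/alive monitorinterval=30
  cache_peer backend2.example.com parent 80 0 no-query originserver

When the monitor URL stops answering, the first peer is marked dead and
requests fall over to the second one.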

On Thu, Aug 27, 2015 at 12:10 PM, Imaginovskiy  wrote:

> Hi all,
>
> Bit of a strange one but I'm wondering if it's possible to have squid
> redirect a site to a secondary backend server if the primary is down. Have
> been looking into this but haven't seen much similar to this. Currently I
> have my setup along the lines of this;
>
> Client -> Squid -> Backend1
>
> but in the event that Backend1 is down, the following should be done;
>
> Client -> Squid -> Backend2
>
> Is squid capable of monitoring connections to peer or redirecting based on
> an ACL looking for some HTTP error code?
>
> Thanks.
>
>
>
>
>
> --
> View this message in context:
> http://squid-web-proxy-cache.1019090.n4.nabble.com/High-Availability-in-Squid-tp4672899.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Volunteers sought

2015-09-01 Thread Kinkie
Hi all,
   I am currently working on some performance improvements for the
next version of squid; I need some help from volunteers to verify the
benefit given by a memory pools feature in real-life scenarios, to
better understand how to develop it further.
I need the help of someone who has a somewhat busy deployment, who's
building their own software packages and who's willing to run a
patched version of a reasonably recent squid (it's a 1-line patch with
no user-visible behavior changes) for a few hours, and report whether
there are any observable changes in performance against the
non-patched version.

If you are interested, please get in touch with me for the details.

Thanks!

-- 
Kinkie
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] deny rep_mime_type

2015-10-21 Thread Kinkie
Hi,
  I suspect (unverified) that

acl dom dstdomain .example.com
acl type rep_mime_type base/type
http_reply_access deny dom type
http_reply_access allow all

will do what you need

On Wed, Oct 21, 2015 at 9:36 PM, HackXBack  wrote:
> hello ,
> can we deny rep_mime_type for specific domain ?
> if yes then how
> if no then why
> thank you ..
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/deny-rep-mime-type-tp4673816.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Accessing squid from a url rather than proxy settings

2015-10-27 Thread Kinkie
Hi,
  a proxy is different from a webserver. The protocol they speak is
slightly different in order to support the different use-cases (nothing
would technically prevent webservers from speaking a language more similar
to proxies); what you are trying to do is to use Squid (the proxy) as
if it was a webserver, which it isn't.

On Tue, Oct 27, 2015 at 5:51 AM, Phil Allred  wrote:
> I want to have users access squid directly from a URL like this:
>
> http://my.squidserver.org:3128/testurl
>
> Rather than by setting a proxy in their browser.  Then I want squid to
> rewrite the URL “my.squidserver.org”  to the site I want users to access.
> The reason I want to do this is in order to access ONLY a certain research
> database through the proxy server, not all HTTP requests.
>
> When I set up squid to do url rewriting, everything works, if I configure my
> browser to use my proxy server.  However, when I try to access squid
> directly  like mentioned above, it refuses to try to rewrite the URL.  Squid
> just sends back an error like this:
>
> The following error was encountered while trying to retrieve the URL:
> /testurl
>
> Invalid URL
>
> Some aspect of the requested URL is incorrect.
>
>
>
> Is what I’m trying to do even possible?  If so, how do I fix my problem?
>
> Thanks in advance,
>
> Phil
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [feature request] Websocket Support for Intercept Mode

2015-10-27 Thread Kinkie
Hi Tarik,
   as far as I know, no developer is working on that.
The best way to get that feature would be to sponsor a developer to
implement it.

On Tue, Oct 27, 2015 at 11:54 AM, Tarik Demirci  wrote:
> Hello,
> Is there any plan to add support for websocket when using intercept mode?
>
> Currently, I use SslPeekAndSplice feature but this brokes many
> websites using websocket (one example is web.whatsapp.com). As a
> workaround, after peeking at step 1, splicing problematic sites and
> bumping the rest works. But maintaining this list is tiring and I
> can't use content filtering for these sites. It would be much better
> if squid had support for websocket.
>
>
> Related issue:
> http://bugs.squid-cache.org/show_bug.cgi?id=4349
> --
> Tarık Demirci
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-cache.org won't redirect to www.squid-cache.org?

2014-09-30 Thread Kinkie
Hi,
  It is one of the unfortunate side effects of the main server dying. The
sysadmins are working hard to restore services but our manpower is limited
so we have to prioritize.
Feel free to submit a bug to the bugzilla database so it is not forgotten.
If you wish to help out, please consider donating to the Squid Software
Foundation (see the "donate" page on the main site) or joining the
sysadmins team (contact me if you are interested). We are working hard to
ensure that such an outage never happens again.
On Sep 30, 2014 10:50 AM, "Amm"  wrote:

> I had pointed this out few months back but I suppose it was not corrected
> or not considered necessary.
>
> Amm.
>
> On 09/30/2014 02:15 PM, Дмитрий Шиленко wrote:
>
>> without "www.*"  -->> Forbidden You don't have permission to access / on
>> this server.
>>
>> Visolve Squid писал 30.09.2014 11:42:
>>
>>> Hi,
>>>
>>> The http://www.squid-cache.org/ domain web site is working fine.
>>>
>>> We have accessed the site a min ago.
>>>
>>> Regards,
>>> ViSolve Squid
>>>
>>> On 9/30/2014 1:47 PM, Neddy, NH. Nam wrote:
>>>
 Hi,

 I "accidentally" access squid-cache.org and get 403 Forbidden error,
 and am wondering why NOT redirect to WWW.squid-cache.org
 automatically?

 I'm sorry if it's intention.
 ~Ned
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

>>>
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users
>>>
>>
>>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ntlmssp: bad ascii: ffffffab (Lan Manager auth broken?)

2014-10-06 Thread Kinkie
Hi,


On Mon, Oct 6, 2014 at 10:24 AM, Victor Sudakov
 wrote:
> Colleagues,
>
> The NTLM (LM) plugin in squid27 worked perfectly while the NTLM plugin in
> squid34 is obviously broken.
>
> I am attaching two log files, one of the old plugin and the other of
> the new one. Could someone please have a look at bad-ntlm.log to see
> why ntlm_smb_lm_auth does not work any more after upgrading to 34?
>
> What does this failure
>
> ntlmssp: bad ascii: ffffffab
> No auth at all. Returning no-auth
> ntlm_smb_lm_auth.cc(531): pid=16346 :sending 'NA Logon Failure' to squid
>
> actually mean?
>
> I know that LM is bad and insecure, but I cannot give it up for the
> present in the production environment until I make Kerberos
> (negotiate) work.
>
> --
> Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
> sip:suda...@sibptus.tomsk.ru
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ntlmssp: bad ascii: ffffffab (Lan Manager auth broken?)

2014-10-06 Thread Kinkie
Whoops, sorry for the empty message.
This seems like a broken client. Can you check whether the client
sending that was a legitimate one?

On Mon, Oct 6, 2014 at 10:24 AM, Victor Sudakov
 wrote:
> Colleagues,
>
> The NTLM (LM) plugin in squid27 worked perfectly while the NTLM plugin in
> squid34 is obviously broken.
>
> I am attaching two log files, one of the old plugin and the other of
> the new one. Could someone please have a look at bad-ntlm.log to see
> why ntlm_smb_lm_auth does not work any more after upgrading to 34?
>
> What does this failure
>
> ntlmssp: bad ascii: ffffffab
> No auth at all. Returning no-auth
> ntlm_smb_lm_auth.cc(531): pid=16346 :sending 'NA Logon Failure' to squid
>
> actually mean?
>
> I know that LM is bad and insecure, but I cannot give it up for the
> present in the production environment until I make Kerberos
> (negotiate) work.
>
> --
> Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
> sip:suda...@sibptus.tomsk.ru
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ntlmssp: bad ascii: ffffffab (Lan Manager auth broken?)

2014-10-06 Thread Kinkie
er.. are you not using the helper provided by Samba? That is the most
reliable way to do NTLM authentication in squid (and most other Linux
software)

On Mon, Oct 6, 2014 at 11:08 AM, Victor Sudakov
 wrote:
> Francesco,
>
> What do you mean by "client"? Absolutely everything in this lab setup
> is the same, including the browser.
>
> The only difference is the ntlm plugin binary (ntlm_auth taken from
> the old squid and ntlm_smb_lm_auth from the new one).
>
> In fact, I replaced the binary and restarted squid.
>
> Kinkie wrote:
>> Whoops, sorry for the empty message.
>> This seems like a broken client. Can you check whether the client
>> sending that was a legitimate one?
>>
>> On Mon, Oct 6, 2014 at 10:24 AM, Victor Sudakov
>>  wrote:
>> > Colleagues,
>> >
>> > The NTLM (LM) plugin in squid27 worked perfectly while the NTLM plugin in
>> > squid34 is obviously broken.
>> >
>> > I am attaching two log files, one of the old plugin and the other of
>> > the new one. Could someone please have a look at bad-ntlm.log to see
>> > why ntlm_smb_lm_auth does not work any more after upgrading to 34?
>> >
>> > What does this failure
>> >
>> > ntlmssp: bad ascii: ffffffab
>> > No auth at all. Returning no-auth
>> > ntlm_smb_lm_auth.cc(531): pid=16346 :sending 'NA Logon Failure' to squid
>> >
>> > actually mean?
>> >
>> > I know that LM is bad and insecure, but I cannot give it up for the
>> > present in the production environment until I make Kerberos
>> > (negotiate) work.
>> >
>> > --
>> > Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
>> > sip:suda...@sibptus.tomsk.ru
>> >
>> > ___
>> > squid-users mailing list
>> > squid-users@lists.squid-cache.org
>> > http://lists.squid-cache.org/listinfo/squid-users
>> >
>>
>>
>>
>> --
>> Francesco
>
> --
> Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
> sip:suda...@sibptus.tomsk.ru



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid website malware?

2014-10-13 Thread Kinkie
Hi,
  could you please check again?
The sysadmins have verified, and they can't confirm what you reported.

Thanks,
  Francesco

On Wed, Oct 8, 2014 at 4:23 PM, Lawrence Pingree 
wrote:

>
> "Convert your dreams to achievable and realistic goals, this way the
> journey is satisfying and progressive." - LP
>
> Best regards,
>
> The Geek Guy
>
> Lawrence Pingree
>
> http://www.lawrencepingree.com/resume/
>
> Author of "The Manager's Guide to Becoming Great"
>
> http://www.Management-Book.com
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>


-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid in captive portal and reconfigure

2014-10-21 Thread Kinkie
Hi,
   it depends on how your helpers are implemented: the helpers
themselves are restarted upon reconfiguration, but if they keep session
state in persistent storage, that storage is not touched.
Ongoing transfers are not stopped; they are allowed to complete.
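
As an (untested) illustration of the kind of setup that survives a
reconfigure: the bundled session helper can keep its state in an on-disk
database, e.g. something along these lines in squid.conf (paths and option
values are examples, check the helper's documentation):

  external_acl_type session ttl=60 %SRC /usr/lib/squid/ext_session_acl -t 7200 -b /var/lib/squid/session.db
  acl existing_session external session
  http_access deny !existing_session

Because the session database lives on disk, the helper restart triggered
by "squid -k reconfigure" does not log anybody out.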

On Wed, Oct 22, 2014 at 12:42 AM, Job  wrote:
>
> Hello,
>
> integrating squid in a captive portal environment, i have to setup different 
> profiles in order to apply restrictions dinamically.
>
> The squid -k reconfigure kill active sessione/connections?
>
> I tried when downloading a file, it stops for one/two seconds and then 
> continues download, but i am not sure if sessiones are dropped/renewed.
>
> Thank you,
> Francesco
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-dev] When i redirected to more traffic to squid box for testing goal via web-polygraph . find error info "OS probably ran out of ephemeral ports at 192.168.2.1:0"

2014-11-09 Thread Kinkie
You want to ask this question of the Polygraph authors; the squid-users
and squid-dev lists are not really the right place for it.
I don't know the details, but that error message probably means that your
robots are opening new connections faster than the TCP stack can recycle
ephemeral ports: closed sockets linger in TIME_WAIT for a while, so under
very high request rates the pool of local ports runs dry.
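
If you want to confirm that this is what is happening, check how many
sockets are stuck in TIME_WAIT on the polygraph box while the test runs;
the commands below are just an illustration:

  ss -s
  ss -tan state time-wait | wc -l

If that count keeps growing into the tens of thousands, the port range is
simply being exhausted faster than the kernel can free it.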

On Thu, Nov 6, 2014 at 8:36 AM, johnzeng  wrote:
>
> Hello :
>
> i meet a problem , When i redirected to more traffic to squid box for
> testing goal via web-polygraph .
>
> squidbox ip is 192.168.2.2 web-polygraph_box ip is 192.168.2.3
>
> /polygraph-client --config
> /accerater/testtool/share/polygraph/workloads/simple.pg --proxy
> 192.168.2.2:80 --verb_lvl 10
>
> ./polygraph-server --config
> /accerater/testtool/share/polygraph/workloads/simple.pg --verb_lvl 10
>
> When testing traffic is 1500request/sec , i found more error info ,
>
> but my os setting is net.ipv4.ip_local_port_range = 1024 65535
> open files (-n) 65536
> max user processes (-u) 1
> /proc/sys/fs/file-max 6815744
>
>
> and i found these error info still , how will i do ? if possible , give
> me some help or advisement please .
>
> **
> error info
> **
>
> EphPortMgr.cc:23: error: 34920/69877 (s98) Address already in use
> 005.28| OS probably ran out of ephemeral ports at 192.168.2.3:0
> 005.28| Client.cc:347: error: 34920/69878 (c63) failed to establish a
> connection
> 005.28| 192.168.2.3 failed to connect to 192.168.2.2:80
> 005.31| i-dflt 104811 0.00 -1 -1.00 3904 32336
> 005.59| PolyApp.cc:189: error: 39/75599 (c58) internal timers may be
> getting beh
>
> 005.90| EphPortMgr.cc:23: error: 64/129 (s98) Address already in use
> 005.90| OS probably ran out of ephemeral ports at 192.168.2.1:0
> 005.90| Client.cc:347: error: 64/130 (c63) failed to establish a connection
> 005.90| 192.168.2.1 failed to connect to 192.168.2.2:80
> 005.90| PolyApp.cc:189: error: 4/180 (c58) internal timers may be
> getting behind
> 005.90| record level of timing drift: 179msec; last check was 3msec ago
> 005.90| EphPortMgr.cc:23: error: 128/260 (s98) Address already in use
> 005.90| OS probably ran out of ephemeral ports at 192.168.2.1:0
> 005.90| Client.cc:347: error: 128/261 (c63) failed to establish a connection
> 005.90| 192.168.2.1 failed to connect to 192.168.2.2:80
> 005.91| PolyApp.cc:189: error: 8/460 (c58) internal timers may be
> getting behind
> 005.91| record level of timing drift: 383msec; last check was 3msec ago
>
>
>
>
>
>
>
>
>
> ***
> Web-polygraph configration
> ***
>
>
> Content SimpleContent = {
> size = exp(13KB); // response sizes distributed exponentially
> cachable = 80%; // 20% of content is uncachable
> };
>
> // a primitive server cleverly labeled "S101"
> // normally, you would specify more properties,
> // but we will mostly rely on defaults for now
> Server S = {
> kind = "S101";
> contents = [ SimpleContent ];
> direct_access = contents;
>
> addresses = ['192.168.2.1:9090']; // where to create these server agents
> };
>
> // a primitive robot
> Robot R = {
> kind = "R101";
> pop_model = { pop_distr = popUnif(); };
> recurrence = 55% / SimpleContent.cachable; // adjusted to get 55% DHR
> req_rate = 1600/sec;
>
>
> origins = S.addresses; // where the origin servers are
> addresses = ['192.168.2.1']; // where these robot agents will be created
> };
>
>
>
>
> **
> sysctl.conf
> **
>
>
> fs.file-max = 6815744
> fs.aio-max-nr = 1048576
> kernel.shmmax = 4294967295
> kernel.threads-max = 212992
> kernel.sem = 250 256000 100 1024
> net.core.rmem_max=5165824
> net.core.wmem_max=262144
> net.ipv4.tcp_rmem=5165824 5165824 5165824
> net.ipv4.tcp_wmem=262144 262144 262144
> net.core.rmem_default = 5165824
> net.core.wmem_default = 262144
> net.core.optmem_max = 25165824
> net.ipv4.ip_local_port_range = 1024 65535
> net.nf_conntrack_max = 6553600
> net.netfilter.nf_conntrack_tcp_timeout_established = 1200
>
>
> net.ipv4.tcp_tw_recycle = 1
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_fin_timeout = 10
> net.ipv4.tcp_orphan_retries = 3
> net.ipv4.tcp_retries2 = 5
> net.ipv4.tcp_keepalive_intvl = 15
> net.ipv4.tcp_syn_retries = 5
> net.ipv4.tcp_synack_retries = 2
> net.ipv4.tcp_keepalive_time = 1800
> net.core.netdev_max_backlog = 300
> net.ipv4.tcp_max_syn_backlog = 262144000
> net.ipv4.tcp_max_tw_buckets = 5
> net.core.somaxconn = 262144000
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_fack = 1
>
> net.ipv4.tcp_timestamps = 0
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_no_metrics_save=1
> net.ipv4.tcp_max_orphans = 26214400
> net.ipv4.tcp_synack_retries = 2
> net.ipv4.tcp_low_latency = 1
> net.ipv4.tcp_rfc1337 = 1
>
> ___
> squid-dev mailing list
> squid-...@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev



-- 
Francesco
___

Re: [squid-users] Centralized Squid - design and implementation

2014-11-16 Thread Kinkie
On Sun, Nov 16, 2014 at 4:54 PM, alberto  wrote:
> Hello everyone,
> first of all thanks to the community of squid for such a great job.

Hello Alberto,

[...]

> I have some questions that I would like to share with you:
>
> 1. I would like to leave the solution we are using now (wpad balancing). In
> a situation like the one I have described, centralized squid serving the
> spokes/branches, which is the best solution for clustering/HA? If one of the
> centralized nodes had to "die" I would like client machines not to remain
> "hanging" but to continue working on an active node without disruption. A
> hierarchy of proxy would be the solution?

If you want to maximize the efficiency of your balancing solution, you
probably want a slightly different approach: instead of using the
client-ip as hashing mechanism, you want to hash on the destination
host.
e.g. have a pac-file like (untested, and to be adjusted):

function FindProxyForURL(url, host) {
   var dest_ip = dnsResolve(host);
   if (!dest_ip) // DNS failure: just pick the first proxy
     return "PROXY local_proxy1:port; PROXY local_proxy2:port; DIRECT";
   // hash on the parity of the last character of the dotted-quad address
   var dest_hash = dest_ip.slice(-1) % 2;
   if (dest_hash)
     return "PROXY local_proxy1:port; PROXY local_proxy2:port; DIRECT";
   return "PROXY local_proxy2:port; PROXY local_proxy1:port; DIRECT";
}
This will balance on the parity of the final digit of the destination IP
of the service. The downside is that it requires DNS lookups by the
clients, and that if the primary local proxy fails, it takes a few seconds
(up to 30) for clients to give up and fail over to the secondary.

local_proxies can then either go direct to the origin server (if
intranet) or use a balancing mechanism such as carp (see the
documentation for the cache_peer directive in squid) to maximize
efficiency, especially for Internet destinations.
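
On the local proxies, a minimal (untested) carp setup towards the central
parents could look like this in squid.conf (hostnames and the intranet
domain are placeholders):

  acl intranet dstdomain .internal.example.com
  always_direct allow intranet
  never_direct allow all
  cache_peer parent1.example.com parent 3128 0 carp no-query
  cache_peer parent2.example.com parent 3128 0 carp no-query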

The only single-point-of-failure at the HTTP level in this design is
the PACfile server, it'll be up to you to make that reliable.

> 2. Bearing in mind that all users will be AD authenticated, which url
> filtering/blacklist solution do you suggest?
> In the past I have worked a lot with squidguard and dansguardian but now
> they don't seem to be the state of the art anymore.
> I've been thinking about two different solutions:
>   2a. To use the native acl squid with the squidblacklist.org lists
> (http://www.squidblacklist.org/)
>   2b. To use urlfilterdb (http://www.urlfilterdb.com/products/overview.html)

I don't know, sorry.

> 3. Which GNU/Linux distro do you suggest me? I was thinking about Debian
> Jessie (just frozen) or CentOS7.

http://wiki.squid-cache.org/BestOsForSquid

-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Fwd: Centralized Squid - design and implementation

2014-11-16 Thread Kinkie
Forwarding, as it may be useful to others.


-- Forwarded message --
From: Kinkie 
Date: Sun, Nov 16, 2014 at 6:27 PM
Subject: Re: [squid-users] Centralized Squid - design and implementation
To: alberto 


On Sun, Nov 16, 2014 at 5:53 PM, alberto  wrote:
> Hi Kinkie
>
> On Sun, Nov 16, 2014 at 5:22 PM, Kinkie  wrote:
>>
>>if (dest_hash)
>>  return "PROXY local_proxy1:port; PROXY local_proxy2:port; DIRECT";
>>return "PROXY local_proxy2:port; PROXY local_proxy1:port; DIRECT"
>> }
>> This will balance by the final digit of the destination IP of the
>> service.
>
>
> With this configuration i can only balance between two nodes in normal
> situation right?
> Whati if i would like to have more nodes balancing the traffic? In case of
> very high load for example.

The hashing is a bit simplistic. You could do something like (again:
untested):

// if the code works, this is the only tuneable needed.
// Everything else self-adjusts.
var proxies = ["PROXY proxy1:port1", "PROXY proxy2:port2", "PROXY proxy3:port3"];

// returns a host-dependent integer between 0 and buckets-1
function hash(host, buckets) {
  var hostip = dnsResolve(host);
  if (!hostip) // dns resolution failure
    return 0;
  // hash on the last octet of the dotted-quad address
  return hostip.slice(hostip.lastIndexOf(".") + 1) % buckets;
}

function FindProxyForURL(url, host) {
  var h = hash(host, proxies.length);
  var p = proxies.slice(); // work on a copy so the global list keeps its order
  for (var j = 0; j < h; ++j)
    p.unshift(p.pop()); // rotate the array by h positions
  return p.join("; ") + "; DIRECT";
}



--
Francesco


-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Centralized Squid - design and implementation

2014-11-19 Thread Kinkie
One word of caution: pactester uses the Firefox JavaScript engine, which is
more forgiving than MSIE's. So while it is a very useful tool, it may let
some errors slip through.
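
For reference, a typical invocation looks like this (flags from the
pacparser-based pactester; double-check against --help on your version):

  pactester -p /etc/squid/wpad.pac -u http://www.example.com/
  pactester -p /etc/squid/wpad.pac -u http://www.example.com/ -c 10.0.0.5

The -c option sets the client address returned by myIpAddress(), which is
handy when the PAC file branches on the client's subnet.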
On Nov 18, 2014 9:45 PM, "Jason Haar"  wrote:

> On 19/11/14 01:39, Brendan Kearney wrote:
> > i would suggest that if you use a pac/wpad solution, you look into
> > pactester, which is a google summer of code project that executes pac
> > files and provides output indicating what actions would be returned to
> > the browser, given a URL.
> couldn't agree more. We have it built into our QA to run before we ever
> roll out any change to our WPAD php script (a bug in there means
> everyone loses Internet access - so we have to be careful).
>
> Auto-generating a PAC script per client allows us to change behaviour
> based on User-Agent, client IP, proxy and destination - and allows us to
> control what web services should be DIRECT and what should be proxied.
> There is no other way of achieving those outcomes.
>
> Oh yes, and now that both Chrome and Firefox support proxies over HTTPS,
> I'm starting to ponder putting up some form of proxy on the Internet for
> our staff to use (authenticated of course!) - WPAD makes that something
> we could implement with no client changes - pretty cool :-)
>
> --
> Cheers
>
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Authentication\Authorization using a PAC file?

2014-11-24 Thread Kinkie
Hi Eliezer,
  I don't think so. PACfiles have no access to the DOM or facilities
like AJAX, and are very limited in what they can return or affect as
side-effects. In theory it could be possible to do something, but in
practice it would be only advisory and not secure: a pacfile must by
definition be available at a publicly-accessible URL, so anyone can read
it and interpret it.

On Mon, Nov 24, 2014 at 11:25 AM, Eliezer Croitoru  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> I do know that pac files contains some form of JS and in the past I
> have seen couple complex PAC files but unsure about the options.
> I want to know if a PAC file can be used for
> Authentication\Authorization, maybe even working against another
> external system to get a token?
>
> Thanks,
> Eliezer
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
>
> iQEcBAEBAgAGBQJUcweKAAoJENxnfXtQ8ZQUy7oH/ieegXDfKslc8NPYgzkRfpRW
> JVYcRB9gqVEQSEpphznVz3s4PTuspYYKmNnr1uWMnUQRC906GPaa326j+EMtQ9Eq
> mcPc2dBU7jyMkj5V4EUAJlMZ+29YzDFKSAAJkf4/cYX5ik1JKOMyIljaKF5O4PQU
> HNhSUVrQ+/9nkDE8puzALYYFygKn+u8exN2pr9ikobAgsGhoMMsULJxQi90st67S
> W9/Be12+2KiBxGWBwnTCNTZjRs5xAg/8xsLTOuMMzKPF0ihpDRcDFQFYZYF22uKM
> BQAZCG1VJWz8wwDrDN8Pmy7AbII2ygFvKu/8s6S7ZAdq7mragGVsyhJzVoQzqJc=
> =l9Ue
> -END PGP SIGNATURE-
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Authentication\Authorization using a PAC file?

2014-11-24 Thread Kinkie
Still it'd be semi-public; you'd have to replicate the access control
rules on the proxy anyway.

On Mon, Nov 24, 2014 at 1:46 PM, Eliezer Croitoru  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 11/24/2014 02:43 PM, Kinkie wrote:
>> Hi Eliezer, I don't think so. PACfiles have no access to the DOM or
>> facilities like AJAX, and are very limited in what they can return
>> or affect as side-effects. In theory it could be possible to do
>> something, but in practice it would be only advisory and not
>> secure: a pacfile must by definition be in a publicly-accessible
>> URL, so anyone can read it and interpret it.
>
> So a small question:
> Can I put the pac file on a https site with basic authentication?
>
> Eliezer
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
>
> iQEcBAEBAgAGBQJUcyihAAoJENxnfXtQ8ZQUbKwH/2iF+hTc0HO14k74+lw0ftj0
> tllr3v2uOIG5a3905B27sSxPvJa+XQ7mTOa0dRdvbmL9klyh0njdyKYVrs0ZjBB4
> 6VFUqMRsimu7gSpjuFZZIySMDy35XM+S3EyluQehiQJpwOfidgxHbF7iiehfd/+1
> yn0kk/AoxnisDMvRvlpKZAwnvTuZFjZoj+zMs0GfZgJ/skcNu2YDAKANAPon+uhm
> MIEf6Gi2zbwPCrsOnQXySPTG17trWPMGUvH3nVXbxFd+8amHSdBWy6O8iEhsFEPZ
> hXP9XyjfXbWIwrYvqhI+0lVc3BEr52tjIdpVV1pu5h9jsBfjJMFTQXvlhhCHHiw=
> =sVZf
> -END PGP SIGNATURE-



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Authentication\Authorization using a PAC file?

2014-11-24 Thread Kinkie
But what if multiple users share the same IP (e.g. Citrix, X11)?

On Mon, Nov 24, 2014 at 2:13 PM, Eliezer Croitoru  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 11/24/2014 03:05 PM, Kinkie wrote:
>> Still it'd be semi-public; you'd have to replicate the access
>> control rules on the proxy anyway.
> So we can use a http backend for authentication via a radius server
> and then get a token or something else while the proxy authenticate
> the user via the radius server(on a ip level).
>
> Eliezer
>
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
>
> iQEcBAEBAgAGBQJUcy8QAAoJENxnfXtQ8ZQUZ0UH/3+bPBj2VIGnlgbU/p4MruMR
> O8cBsiNCPtldaMA8kMeZl4A/D5ETEXw/NmutEASqSJZiQJWdYNu6C0gCt+rQPPA1
> 9ae4d3zUfJuCyiYFcl9IqlP5YtBIvry8J2ml9f5eSlEfpGwkddLZ2PKtfkixaDva
> TmNSBmsKgW410Wtyd24YipbpVyoOc8eXxwfH8b/1Evm4hRsDZdSg6H274yC5kTqc
> C3OxFXfej8uZQT9lUw0qKwsqwOu0e82fIuUxqzcxsAlH3MlIIze2LyLIgtdTRo+F
> iYcOjTSmMO92B3okbO79SI8ssABclF0LVARi1PdTqJhC0qic/WHgrKhomvXSkiM=
> =hxiL
> -END PGP SIGNATURE-



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Multiple SSL Domains on Reverse Proxy

2014-12-01 Thread Kinkie
Hi all,
  I've created bug 4153 to track progress.
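
In the meantime, with one IP address per domain (as Henrik describes
below), a rough, untested squid.conf sketch would be along these lines --
addresses, certificate paths and backend IPs are placeholders:

  https_port 192.0.2.10:443 accel cert=/etc/squid/xyz.pem key=/etc/squid/xyz.key defaultsite=www.xyz.com
  https_port 192.0.2.11:443 accel cert=/etc/squid/mno.pem key=/etc/squid/mno.key defaultsite=www.mno.com
  cache_peer 10.0.0.1 parent 80 0 no-query originserver name=backend_xyz
  cache_peer 10.0.0.2 parent 80 0 no-query originserver name=backend_mno
  acl site_xyz dstdomain www.xyz.com
  acl site_mno dstdomain www.mno.com
  cache_peer_access backend_xyz allow site_xyz
  cache_peer_access backend_mno allow site_mno
  http_access allow site_xyz
  http_access allow site_mno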


On Mon, Dec 1, 2014 at 8:59 AM, Henrik Nordstrom  wrote:
>
> lör 2014-11-29 klockan 20:39 -0500 skrev Roman Gelfand:
>> Is it possible to listen on port 443 for requests for multiple domains
>> ie... www.xyz.com, www.mno.com, etc...?
>
> If you have one IP address per domain then it's just one https_port with
> explicit ip:port per domain, with vhost or defaultdomain= telling Squid
> what hostname to use as requested host in HTTP(S).
>
> Supporting more than one domain on the same ip:port is currently only
> possible if you use a multi-domain certificate.
>
> We really should support SNI negotiation to select certificate based on
> client requested domain. SNI is a TLS extension to indicate requested
> host during TLS negotiation and is quite well supported in todays
> browsers.  Patches implemententing this are very welcome.
>
> Regards
> Henrik
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Packet Marking for Traffic Shaping

2014-12-06 Thread Kinkie
Hello Osmany,
  have you tried http://wiki.squid-cache.org/Features/QualityOfService ?
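
For the per-client marking you describe, tcp_outgoing_tos is probably the
directive you want; a rough, untested sketch (subnets and TOS values are
placeholders):

  acl group_gold   src 10.1.1.0/24
  acl group_silver src 10.1.2.0/24
  tcp_outgoing_tos 0x20 group_gold
  tcp_outgoing_tos 0x40 group_silver

Your firewall can then shape on the TOS/DSCP value it sees on Squid's
outgoing connections. For netfilter MARK based setups there is also
qos_flows; both are covered on the wiki page above.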


Kinkie

On Fri, Dec 5, 2014 at 4:15 PM, Osmany Goderich  wrote:
> Hi everyone,
>
> I was googling and I couldn't find anything clear about the subject, but I
> am trying to make Squid mark the packets in order to differentiate traffic
> so that I can do Traffic Shaping on my hardware firewalls. This should help
> me do the job easier in my firewalls since the requests that go to internet
> come from only one IP (the proxy-cache) and I really need to identify
> different clients so that I can apply different traffic shaping rules. My
> firewall supports DSCP or ToS. I was looking up ToS but I am having a hard
> time to understand how can I apply different values of ToS to all my
> clients(I'm talking about more that 50 clients with different bandwidth to
> be assigned).
> Can anyone please help me with this?
>
> Thanks
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.4.10 incorrectly configured on Solaris 10

2014-12-18 Thread Kinkie
Hello Yuri,
  this is probably a system header dependency: netinet/ip_nat.h seems to
expect some other headers to be included before it, and the configure test
does not include them.
Could you check whether the system manual pages mention anything about
ipfmutex_t? If they do, at the beginning of the page they should list the
required #include <...> lines. Could you copy-paste those lines here?
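
If it is easier, something like this (illustrative) run on the build box
would also show what the header itself expects to be included first:

  grep '^#include' /usr/include/netinet/ip_nat.h
  grep '^#include' /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h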

Thanks

On Thu, Dec 18, 2014 at 3:01 PM, Yuri Voinov  wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi there,
>
> yesterday (and during last four day) I've try to build transparent
> caching proxy on Solaris 10 (x86_64) testing environment.
>
> Configuration options are:
>
> # Without SSL 64 bit GCC
> ./configure '--prefix=/usr/local/squid' '--enable-translation'
> '--enable-external-acl-helpers=file_userip,unix_group'
> '--enable-icap-client' '--enable-ipf-transparent'
> '--enable-storeio=diskd' '--enable-removal-policies=lru,heap'
> '--enable-devpoll' '--disable-wccp' '--enable-wccpv2'
> '--enable-http-violations' '--enable-follow-x-forwarded-for'
> '--enable-arp-acl' '--enable-htcp' '--enable-cache-digests' '--with-dl'
> '--enable-auth-negotiate=none' '--disable-auth-digest'
> '--disable-auth-ntlm' '--disable-auth-basic'
> '--enable-storeid-rewrite-helpers=file'
> '--enable-log-daemon-helpers=file' '--with-filedescriptors=131072'
> '--with-build-environment=POSIX_V6_LP64_OFF64' 'CFLAGS=-O3 -m64 -fPIE
> -fstack-protector -mtune=core2 --param=ssp-buffer-size=4 -pipe'
> 'CXXFLAGS=-O3 -m64 -fPIE -fstack-protector -mtune=core2
> --param=ssp-buffer-size=4 -pipe' 'CPPFLAGS=-I/usr/include
> -I/opt/csw/include' 'LDFLAGS=-fPIE -pie -Wl,-z,now'
>
> But binaries built without interceptor support.
>
> Some investigation:
>
> Config.log has errors with ip_nat.h compilation:
>
> configure:27435: checking for netinet/ip_nat.h
> configure:27435: g++ -c -m64 -O3 -m64 -fPIE -fstack-protector
> -mtune=core2 --param=ssp-buffer-size=4 -pipe -march=native -std=c++11
> -I/usr/include -I/opt/csw/include -I/usr/include/gssapi
> -I/usr/include/kerberosv5 conftest.cpp >&5
> In file included from conftest.cpp:266:0:
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:98:2:
> error: 'ipfmutex_t' does not name a type
>   ipfmutex_t nat_lock;
>   ^
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:108:2:
> error: 'frentry_t' does not name a type
>   frentry_t *nat_fr; /* filter rule ptr if appropriate */
>   ^
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:112:2:
> error: 'ipftqent_t' does not name a type
>   ipftqent_t nat_tqe;
>   ^
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:113:2:
> error: 'u_32_t' does not name a type
>   u_32_t  nat_flags;
>   ^
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:114:2:
> error: 'u_32_t' does not name a type
>   u_32_t  nat_sumd[2]; /* ip checksum delta for data segment */
>   ^
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:115:2:
> error: 'u_32_t' does not name a type
>   u_32_t  nat_ipsumd; /* ip checksum delta for ip header */
>   ^
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:116:2:
> error: 'u_32_t' does not name a type
>   u_32_t  nat_mssclamp; /* if != zero clamp MSS to this */
>   ^
> /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:117:2:
> error: 'i6addr_t' does not name a type
>   i6addr_t nat_inip6;
>
> and so, configure does not see IP Filter finally, ergo cannot build
> interceptor.
>
> Yes, IP Filter installed in system. Yes, I've try to build 32 bit also.
> Yes, I've try to build on another system. Yes, I've try to play with
> configure option. Yes, I've try also development version 3.5.x - with
> the same result.
>
> Amos, need your help.
>
> Thanks in advance,
>
> WBR, Yuri
>
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBAgAGBQJUkt4vAAoJENNXIZxhPexGn9EH/3CUqof3f4xHNBuZIhC35Zup
> EgTYQGwUck0hq98GP+USC7C186qW3pscafTO82olbb55xb7Bpmw6b0YVgsVK9AJy
> u2IFnc6MQe1rhYl8NM5L9B5XC6K5gKb8P4UQYAirYPvu0XDxWJYd0N8HqL+8uI6+
> 3OtvrGnQZyCOHTuQ8Ubu2y3yDpjdUhjX7sCRER8QiLR/IMTyXAu2pmIpMISLTMK+
> wmI1xVfrafpg5TO+RzkwQFbWQhNUq1JqY6kttHb9D/Qg5eTw2ceFLYsrkTiuwpYv
> czjRk2J4F7WYmbFJ0sTwRqyAZtM8xC8b9dk4SjkqOEpgIE/wdnqCJp/yQbfo/kk=
> =LWVp
> -END PGP SIGNATURE-
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] DiskThreadsDiskFile::openDone squid 3.5.0.4

2014-12-26 Thread Kinkie
Nothing to worry about: the cache files were removed from disk by something
outside Squid, so they are not found when Squid tries to serve them. Squid
handles the error, treats the object as a miss, and carries on.
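
If the log noise bothers you, one (somewhat heavy-handed, illustrative) way
to bring the index back in sync with what is actually on disk is:

  squid -k shutdown
  rm /cache0*/swap.state*      # adjust to your cache_dir paths
  # then start squid again the way you normally do

On the next startup Squid notices the missing index and rebuilds it by
scanning the cache directories, which can take a while on a large cache.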

On Fri, Dec 26, 2014 at 1:22 PM, HackXBack  wrote:
> Hello squid ,
> after using 3.5.0.4 on fresh debian system
> i see many errors in cache.log
>
> 2014/12/26 07:21:39 kid1|   /cache03/2/00/31/3123
> 2014/12/26 07:21:39 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:21:39 kid1|   /cache04/1/00/5F/5F16
> 2014/12/26 07:21:39 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:21:39 kid1|   /cache03/3/00/29/291F
> 2014/12/26 07:21:39 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:21:39 kid1|   /cache05/1/00/11/11F6
> 2014/12/26 07:21:45 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:21:45 kid1|   /cache03/3/00/17/176C
> 2014/12/26 07:21:46 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:21:46 kid1|   /cache02/6/00/15/15CE
> 2014/12/26 07:21:47 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:21:47 kid1|   /cache02/6/00/0B/0B07
> 2014/12/26 07:21:47 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:21:47 kid1|   /cache02/2/00/02/02B4
> 2014/12/26 07:22:09 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:09 kid1|   /cache03/4/00/03/0365
> 2014/12/26 07:22:12 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:12 kid1|   /cache03/2/00/1F/1F26
> 2014/12/26 07:22:12 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:12 kid1|   /cache03/6/00/1F/1F25
> 2014/12/26 07:22:13 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:13 kid1|   /cache04/6/00/1F/1F21
> 2014/12/26 07:22:15 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:15 kid1|   /cache05/2/00/1F/1F30
> 2014/12/26 07:22:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:21 kid1|   /cache02/6/00/1D/1D5A
> 2014/12/26 07:22:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:21 kid1|   /cache02/2/00/0C/0CB5
> 2014/12/26 07:22:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:21 kid1|   /cache03/5/00/01/0144
> 2014/12/26 07:22:31 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:31 kid1|   /cache02/2/00/25/2504
> 2014/12/26 07:22:31 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
> 2014/12/26 07:22:31 kid1|   /cache04/5/00/24/244D
> 2014/12/26 07:22:31 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
> directory
>
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/DiskThreadsDiskFile-openDone-squid-3-5-0-4-tp4668840.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users