Re: [squid-users] Multiple SSL Domains on Reverse Proxy

2014-12-01 Thread Henrik Nordstrom

On Sat 2014-11-29 at 20:39 -0500, Roman Gelfand wrote:
> Is it possible to listen on port 443 for requests for multiple domains
> ie... www.xyz.com, www.mno.com, etc...?

If you have one IP address per domain then it's just one https_port
line with an explicit ip:port per domain, with vhost or defaultsite=
telling Squid what hostname to use as the requested host in HTTP(S).

Supporting more than one domain on the same ip:port is currently only
possible if you use a multi-domain certificate.
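
A minimal sketch of the one-IP-per-domain setup (the IPs and
certificate paths below are placeholders, not from the original post):

  https_port 192.0.2.1:443 accel defaultsite=www.xyz.com cert=/etc/squid/xyz.pem key=/etc/squid/xyz.key
  https_port 192.0.2.2:443 accel defaultsite=www.mno.com cert=/etc/squid/mno.pem key=/etc/squid/mno.key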

We really should support SNI negotiation to select the certificate
based on the client-requested domain. SNI is a TLS extension that
indicates the requested host during TLS negotiation and is quite well
supported in today's browsers.  Patches implementing this are very
welcome.

Regards
Henrik




Re: [squid-users] Squid WCCP with multiple workers

2014-12-01 Thread Henrik Nordstrom
On Fri 2014-11-28 at 10:28 +, Stephen Baynes wrote:
> Is WCCP supposed to work with Squid multiple workers?

In theory it should work, but I am not sure it has been adapted for
multi-worker operation.

> It works with 1 worker. If we change the number of workers from 1 to 2
> we see it fail. The router no longer is aware of Squid and does not
> reroute the data to the Squid box.

Try making the wccp configuration conditional on worker id 1. If this
makes a difference then Squid needs to be extended to add an internal
wccp worker.
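
A minimal sketch of that test, using Squid's SMP conditional
configuration syntax (the router IP is a placeholder; apply it to
whichever wccp2_* directives you use):

  if ${process_number} = 1
  wccp2_router 192.0.2.254
  endif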

Regards
Henrik




Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2014-12-01 Thread Amos Jeffries

On 26/11/2014 8:59 a.m., Doug Sampson wrote:
> 
> Thanks, Amos, for your pointers.
> 
> I've commented out all the refresh_pattern lines appearing above
> the last two lines.
> 
> I also have dropped diskd in favor of using aufs exclusively,
> taking out the min-size parameter. I've commented out the
> diskd_program support option. In the previous version of squid
> (2.7) I had split the cache_dir into two types with great success
> using coss and aufs. Previously I had only aufs and performance
> wasn't where I wanted it. Apparently coss is no longer supported in
> the 3.x version of squid atop FreeBSD.

COSS has been replaced with the Rock storage type in Squid-3. They
should be used in roughly similar ways in terms of traffic optimization.
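
For instance, a small Rock store in squid.conf might look like this
(the path and sizes are illustrative only):

  cache_dir rock /cache/rock 4096 max-size=32768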

> 
> The pathname for the cache swap logs has been fixed. Apparently
> this came from a squid.conf example that I copied in parts. Would
> this be the reason why we are seeing the error messages in
> /var/log/messages regarding swapping mentioned in my original
> post?

No. I think that is coming out of the OS kernel memory management. It
uses the term "swap" as well in regard to disk-backed virtual memory.

If your system is "swapping" (using that disk-backed "swap memory")
while running Squid then you will get terrible performance as a matter
of course, since the Squid cache index and cache_mem are often very
large in RAM and accessed often.


> 
> The hierarchy_stoplist line has been stripped out as you say it is 
> deprecated.
> 
> The mem .TSV file is attached herewith.
> 
> Currently I have the cache_dir located on the OS disk and all of
> the cache logging files on a second drive. Is this the optimal
> setup of cache_dir and logs?

I would do it the other way around. Logs are appended with a small
amount of data each transaction, whereas the main cache_dir has a
relatively large % of the bandwidth throughput being written out to it
constantly (less % in recent Squid, but still a lot). The disk most
likely to die early is the one holding the cache_dir.
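
In squid.conf terms the suggested swap is simply (paths illustrative):

  # heavy constant writes -> the expendable second drive
  cache_dir aufs /disk2/squid-cache 100000 16 256
  # light append-only writes -> the OS disk
  access_log /var/log/squid/access.log squid
  cache_log /var/log/squid/cache.log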

Amos


Re: [squid-users] WARNING: there are more than 100 regular expressions

2014-12-01 Thread Amos Jeffries

On 28/11/2014 10:23 p.m., navari.lore...@gmail.com wrote:
> I saw that the warning does not prevent use of the lines beyond
> the first 100. I have no problem with the CPU (7%). I just do not
> like seeing the "WARNING".
> 

The RE engine can scan for individual patterns easily enough but
copying patterns into libregex memory and scanning the entire URI 100+
times per transaction is quite an excessive amount of work for the
simple task being attempted.

As Marcus said, lists of domains are best matched using the dstdomain
ACL type, which does an optimized single scan of just the domain name
portion no matter how many entries there are.

If you have no choice but to use REs, then manually combining patterns
is best. This warning is just an indication that you need to pay some
attention to reducing the count.

For example the list containing:
 facebook\.com
 fbcdn\.com

Can be reduced to:
  f(acebook|bcdn)\.com


If you are importing a public list of domains to block please
investigate whether your list source supports squid dstdomain ACL
formats. The best lists provide files with Squid dstdomain format
(which is also almost identical to the rbldnsd 'RHSBL' data format).
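
A minimal sketch of that setup (the file name is an example):

  acl blocklist dstdomain "/etc/squid/blocked.domains"
  http_access deny blocklist

with /etc/squid/blocked.domains holding one domain per line, a leading
dot matching the domain and all its subdomains:

  .facebook.com
  .fbcdn.com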

Amos



Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2014-12-01 Thread k simon
  Maybe your problem is related to sysctl MIB tuning around
swap/overcommit etc. I did not observe a memory leak with squid 3.4.4,
but FreeBSD 10 does swap more frequently than older versions.



Simon


Re: [squid-users] long standing bug about aufs on freebsd 10

2014-12-01 Thread k simon

  I did not observe this bug with diskd, but diskd has lower performance.

Simon


On 14/12/1 13:49, d...@getbusi.com wrote:

FWIW I have this bug in CentOS 6.




On Mon, Dec 1, 2014 at 4:48 PM, k simon <chio1...@gmail.com> wrote:

Hi, Lists,
AUFS cannot run stably for even one day, and reports:

2014/11/30 07:10:15 kid1| WARNING: swapfile header inconsistent with
available data
2014/11/30 07:10:15 kid1| Could not parse headers from on disk object
2014/11/30 07:10:15 kid1| BUG 3279: HTTP reply without Date:
2014/11/30 07:10:15 kid1| StoreEntry->key:
553ABDC02632452B7204639E5DDA66D8
2014/11/30 07:10:15 kid1| StoreEntry->next: 0
2014/11/30 07:10:15 kid1| StoreEntry->mem_obj: 0x82178ed00
2014/11/30 07:10:15 kid1| StoreEntry->timestamp: -1
2014/11/30 07:10:15 kid1| StoreEntry->lastref: 1417302615
2014/11/30 07:10:15 kid1| StoreEntry->expires: -1
2014/11/30 07:10:15 kid1| StoreEntry->lastmod: -1
2014/11/30 07:10:15 kid1| StoreEntry->swap_file_sz: 0
2014/11/30 07:10:15 kid1| StoreEntry->refcount: 1
2014/11/30 07:10:15 kid1| StoreEntry->flags:
CACHABLE,PRIVATE,FWD_HDR_WAIT,VALIDATED
2014/11/30 07:10:15 kid1| StoreEntry->swap_dirn: -1
2014/11/30 07:10:15 kid1| StoreEntry->swap_filen: -1
2014/11/30 07:10:15 kid1| StoreEntry->lock_count: 2
2014/11/30 07:10:15 kid1| StoreEntry->mem_status: 0
2014/11/30 07:10:15 kid1| StoreEntry->ping_status: 2
2014/11/30 07:10:15 kid1| StoreEntry->store_status: 1
2014/11/30 07:10:15 kid1| StoreEntry->swap_status: 0
2014/11/30 07:10:15 kid1| assertion failed: store.cc:1876: "isEmpty()"

How can I work around it?




Simon


Re: [squid-users] Squid 3.4.9 RPM release

2014-12-01 Thread Omid Kosari
Any news about the Ubuntu version?





Re: [squid-users] 2.7.STABLE9 & Error with option deny_info from local requests

2014-12-01 Thread Amos Jeffries

On 29/11/2014 2:40 a.m., Mark Riede wrote:
> Hello,
> 
> I have a strange behavior with Squid 2.7.STABLE9 and local
> requests which should be intercepted by the deny_info option.
> 

1) *Please* upgrade your Squid. 2.7 has been unsupported for more than 5
years now and is seriously out of date with modern web traffic needs.

2) "intercept" and how deny_info works are completely unrelated concepts.


> I am using Squid as a reverse proxy. I have configured a list of 
> subdomains (i.e. subdomain.domain.tld) in a file via the option 
> dstdomain, which will be forwarded to the defined cache peer.
> There is an additional list of domains (i.e. *.domain.tld) which
> match via wildcard to all other domains, which are not absolutely
> defined yet and will be forwarded to a custom error page via the
> option deny_info.
> 
> The problem is that requests forwarded to the ip of the server,
> i.e. 192.168.0.1, will be caught by the option deny_info. But,
> when the request is forwarded to the ip of the localhost
> (127.0.0.1), the option deny_info will not match.

?? *all* traffic to the cache_peer is "forwarded to the ip 127.0.0.1"
since that IP *is* the cache_peer's IP.


127.0.0.1 (and ::1) are special purpose IP addresses. Traffic is not
permitted to go to/from it from a global-scope IP address. When you
send to/from a localhost IP then it is mandatory for the TCP system to
use the localhost IP as source.


> Now the strange behaviour is that requests to the ip of the
> localhost but with the destination domain subdomain.domain.tld will
> be answered successfully. I need a fix because clients get the
> custom error page for requests via http (NAT to 192.168.0.1) but
> not the same response via https (nginx to 127.0.0.1). I don't know
> where or how I can fix this problem or do more debugging.


Why is nginx not forwarding proper HTTP messages with the relevant
Host: header contents to Squid? The HTTP message syntax is no
different when sent over TLS port 443 than when sent over TCP port 80.

From your description it sounds like nginx is sending Squid
messages with the URL https://127.0.0.1/*, which cannot be expected to
exist in your list of acceptable domains.


Also, why are you not using Squid to receive and direct both HTTP and
HTTPS traffic to the relevant servers?
 Squid accepts port 443 traffic with an https_port directive just fine.
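
A minimal sketch (the certificate paths are placeholders):

  https_port 443 accel vhost cert=/etc/squid/site.pem key=/etc/squid/site.key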


> 
> # Config
> http_access allow localhost

The above rule permits all traffic from 127.0.0.1 to go through this
proxy *no matter what*. From your description that would be all
traffic arriving from nginx **AND** any traffic you direct at
127.0.0.1 IP from any other software.

It is a very bad thing to do, particularly for a reverse-proxy. Remove
it and traffic from nginx (and your 127.0.0.1 tests) will start to
obey the other rules. Not a complete fix, but required for Squid to
work as you expect.

> acl foo dstdomain "/file"
> acl foo_deny dstdom_regex "/file_deny"
> http_access allow foo

When testing this ACL with a raw IP, Squid will look up the
reverse-DNS of the IP and compare the result with the contents of
/file. Meaning 127.0.0.1 == "localhost" --> is "localhost" one of the
peer-hosted domain names? It should not be.

> cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=srv1 login=PASS
> cache_peer_access srv1 allow foo
> cache_peer_access srv1 deny all
> deny_info ERR_FOO foo_deny
> http_access deny foo_deny
> http_access deny all


The rest of these rules match your intended behaviour of Squid.
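
Putting this together, the corrected rule set would look roughly like
this (same ACL file names as the original config, with the "allow
localhost" line removed):

  acl foo dstdomain "/file"
  acl foo_deny dstdom_regex "/file_deny"
  http_access allow foo
  deny_info ERR_FOO foo_deny
  http_access deny foo_deny
  http_access deny all
  cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=srv1 login=PASS
  cache_peer_access srv1 allow foo
  cache_peer_access srv1 deny all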

Amos


Re: [squid-users] Best way to deny access to URLs in Squid 3.3.x?

2014-12-01 Thread Rafael Akchurin
Hello Mirza,


We are building an MSI installer for the Squid you are using (see sources at
https://github.com/diladele/squid3-windows and http://squid.diladele.com/). It
is now a very early alpha, and an MSI for qlproxyd will follow in a week or
so. I will try to post an update when it is ready.


Fighting with the latest Squid build under Cygwin and/or MinGW takes all the
time and I am losing the battle :(


Best regards,

Rafael



From: Mirza Dedic 
Sent: Tuesday, October 14, 2014 8:17 PM
To: Rafael Akchurin
Subject: RE: [squid-users] Best way to deny access to URLs in Squid 3.3.x?

I will be happy to test the native windows ICAP build. We are running SQUID 
3.3.3 on Windows via Cygwin.

Is there a source we can try to build from? I see you have them listed
for various distributions; however, Cygwin is not among them.


From: rafael.akchu...@diladele.com
To: mirza.de...@outlook.com; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Best way to deny access to URLs in Squid 3.3.x?
Date: Tue, 14 Oct 2014 17:46:11 +

Hello Mirza,

I would humbly propose taking a look at any of the ICAP servers listed on 
http://www.squid-cache.org/Misc/icap.html.
BTW we are now preparing a native Windows ICAP build of qlproxy and would be 
glad if you could take a look.

Best regards,
Raf

From: Mirza Dedic <mirza.de...@outlook.com>
Date: Tuesday 14 October 2014 19:37
To: "squid-users@lists.squid-cache.org" <squid-users@lists.squid-cache.org>
Subject: [squid-users] Best way to deny access to URLs in Squid 3.3.x?

Just curious, what are some of you doing in your Squid environment as far as 
URL filtering goes? It seems there are a few options out there.. squidguard... 
dansguardian.. plain block lists.

What is the best practice for implementing some sort of block list in squid?
I've found urlblacklist.com, which has a pretty good URL block list broken
down by category. What would be the best way to go: use dansguardian with
this list, or set it up in squid.conf as an "acl dstdomain" and feed in the
block list file without calling an external helper application?

Thanks.


Re: [squid-users] Squid WCCP with multiple workers

2014-12-01 Thread Amos Jeffries

On 1/12/2014 9:38 p.m., Henrik Nordstrom wrote:
> On Fri 2014-11-28 at 10:28 +, Stephen Baynes wrote:
>> Is WCCP supposed to work with Squid multiple workers?
> 
> In theory it should work, but I am not sure it has been adapted for
> multi-worker operation.

Not that I am aware of.

I think each worker will be emitting separate WCCP packets to the
router. If they are all sharing http_port entries then there should be
no problem; as long as one worker is running, the router will see the
proxy as alive. With different http_port between workers they can
confuse the router and cause it to reset its traffic mappings up to
every 10 seconds.

There may also be problems with receiving the WCCP reply back from the
router, as there is no guarantee it goes to the right worker. Some
workers may receive multiple packets, and some might think their
router has gone away because it doesn't reply often enough. This
should only affect Squid workers which have never received any router
reply before receiving HTTP; once they get one packet they cache the
details until updated.

> 
>> It works with 1 worker. If we change the number of workers from 1
>> to 2 we see it fail. The router no longer is aware of Squid and
>> does not reroute the data to the Squid box.
> 
> Try making the wccp configuration conditional on worker id 1. If
> this makes a difference then Squid needs to be extended to add an
> internal wccp worker.
> 

Amos



Re: [squid-users] Squid 3.4.9 RPM release

2014-12-01 Thread Amos Jeffries

On 1/12/2014 10:11 p.m., Omid Kosari wrote:
> Any news about the Ubuntu version?

Feel free to grab the 3.4.8-2 packages from the Debian official
repositories. They contain all the useful parts of the Ubuntu packages
except upstart integration, and should work fine on recent Ubuntu
versions.

For older Debian/Ubuntu, work is underway to get the package into the
"wheezy-backports" repository as well.

Amos



Re: [squid-users] Squid WCCP with multiple workers

2014-12-01 Thread Stephen Baynes
Thanks for the suggestion of making wccp conditional on worker 1.
Unfortunately it does not work.

Stephen

On 1 December 2014 at 08:38, Henrik Nordstrom  wrote:
> On Fri 2014-11-28 at 10:28 +, Stephen Baynes wrote:
>> Is WCCP supposed to work with Squid multiple workers?
>
> In theory it should work, but I am not sure it has been adapted for
> multi-worker operation.
>
>> It works with 1 worker. If we change the number of workers from 1 to 2
>> we see it fail. The router no longer is aware of Squid and does not
>> reroute the data to the Squid box.
>
> Try making the wccp configuration conditional on worker id 1. If this
> makes a difference then Squid needs to be extended to add an internal
> wccp worker.
>
> Regards
> Henrik
>
>



-- 
Stephen Baynes CEng MBCS CITP


Re: [squid-users] squid-3.5.0.2-20141031-r13657 crashes

2014-12-01 Thread Amos Jeffries

On 1/12/2014 9:44 a.m., James Harper wrote:
> I've been getting squid crashes with squid-3.5.0.2-20141031-r13657.
> Basically I think my cache got corrupt - started seeing
> TCP_SWAPFAIL_MISS and md5 mismatches. Config is cache_dir ufs
> /usr/local/squid/var/cache/squid 102400 16 256
> 
> It's possible that at one point I might have started 2 instances of
> squid running at once... could that cause corruption?

Yes, very likely. More so the longer they were both running.

I see you mention segfaults below; those can also cause corruption for
any objects in use at the time of the crash.

> 
> And if it happens again, what sort of things should I collect to
> better diagnose the problem?

Firstly, Squid is expected to cope with cache corruption by cleaning
up entries as it identifies them. The SWAPFAIL and MD5 details you
mention above are signs that the detection, at least, is occurring.

After any type of corruption these messages can be expected for a
while, with an exponential drop-off in frequency as the cache gets
fixed. Only if they start occurring with unknown cause, or do not
decrease in frequency, is there a serious problem to attend to (usually
a disk dying, Squid crashing a lot, or a second Squid process started
wrongly).


Start with the basic things required for bug reports
(http://wiki.squid-cache.org/SquidFaq/BugReporting). That also
includes investigation of the segfaults mentioned below.

Plus, if you can, the URL, the HTTP request headers in full, and any
access.log entry you can match up with the issue.

> As I see it there are two problems:
> 1. that the cache got corrupt in the first place
> 2. that a corrupt cache can crash squid

These may in fact be the reverse, with (2) the cause and (1) the
effect. When a segfault happens the details are not logged at all,
because the process doing the logging is the Squid which has died.

Only on an assertion failure or exception error is Squid running well
enough to log why before exiting.


> 
> Unfortunately I did the stupid thing and deleted the cache without
> taking a copy for post-mortem... the best I can do is:
> 
> [31072.428922] squid[6317]: segfault at 58 ip 0061a6f9 sp
> 7fff8b9e2d40 error 4 in squid[40+4e9000] [31654.707792]
> squid[6329]: segfault at 58 ip 0061a6f9 sp 7fff54358fe0
> error 4 in squid[40+4e9000] [31783.399832] squid[6465]:
> segfault at 58 ip 0061a6f9 sp 7fff82af0aa0 error 4 in
> squid[40+4e9000] [31984.470507] squid[6509]: segfault at 58 ip
> 0061a6f9 sp 7fff028a6640 error 4 in
> squid[40+4e9000] [32178.270298] squid[6576]: segfault at 58 ip
> 0061a6f9 sp 7fffe64a07e0 error 4 in
> squid[40+4e9000] [32789.635935] squid[6626]: segfault at 58 ip
> 0061a6f9 sp 76932960 error 4 in
> squid[40+4e9000]
> 
> addr2line -e /usr/local/squid/sbin/squid 0061a6f9 
> /usr/local/src/squid-3.5.0.2-20141031-r13657/src/store.cc:962
> 

This looks like http://bugs.squid-cache.org/show_bug.cgi?id=4131. The
two patches posted there seem to be getting good results.

Amos


Re: [squid-users] Multiple SSL Domains on Reverse Proxy

2014-12-01 Thread Kinkie
Hi all,
  I've created bug 4153 to track progress.


On Mon, Dec 1, 2014 at 8:59 AM, Henrik Nordstrom  wrote:
>
> On Sat 2014-11-29 at 20:39 -0500, Roman Gelfand wrote:
>> Is it possible to listen on port 443 for requests for multiple domains
>> ie... www.xyz.com, www.mno.com, etc...?
>
> If you have one IP address per domain then it's just one https_port
> line with an explicit ip:port per domain, with vhost or defaultsite=
> telling Squid what hostname to use as the requested host in HTTP(S).
>
> Supporting more than one domain on the same ip:port is currently only
> possible if you use a multi-domain certificate.
>
> We really should support SNI negotiation to select the certificate
> based on the client-requested domain. SNI is a TLS extension that
> indicates the requested host during TLS negotiation and is quite well
> supported in today's browsers.  Patches implementing this are very
> welcome.
>
> Regards
> Henrik



-- 
Francesco


Re: [squid-users] WARNING: there are more than 100 regular expressions

2014-12-01 Thread Marcello Romani

On 01/12/2014 09:57, Amos Jeffries wrote:

[...]


If you are importing a public list of domains to block please
investigate whether your list source supports squid dstdomain ACL
formats. The best lists provide files with Squid dstdomain format
(which is also almost identical to the rbldnsd 'RHSBL' data format).

Amos



May I suggest: http://www.squidblacklist.org/

I have been using it for almost a year now and it has done a good job 
for me. Over time I've just had to add a dozen entries and allow some 
specific URLs to tailor it to my specific company.


--
Marcello Romani


Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2014-12-01 Thread Doug Sampson
> Maybe your problem is related to sysctl MIB tuning around
> swap/overcommit etc. I did not observe a memory leak with squid 3.4.4,
> but FreeBSD 10 does swap more frequently than older versions.

Could you elaborate a bit more? That went over my head. What could I do in 
terms of tuning the system?

~Doug


Re: [squid-users] WARNING: there are more than 100 regular expressions

2014-12-01 Thread FredB
Maybe you should use a tool that was created for the sole purpose of
filtering web sites, like e2guardian, squidguard, etc.

Fred


Re: [squid-users] Problem with digest authentification and credential backend

2014-12-01 Thread FredB
> 
> Ok, thanks,
> 
> Tested with both nonce_count and nonce_max_duration, no problem. Do
> you know if it works with squid 3.5?
> 
> 

No, sorry, I don't know, but if the patch can be applied I guess it
should work, unless there are some changes in DIGEST between 3.4 and
3.5.


Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2014-12-01 Thread k simon
  I used an ugly tuning: setting vm.defer_swapspace_pageouts to 1. But
it may cause some issues when physical memory is really exhausted. I
have not had much time to investigate the right way, but I think
vm.swap_idle_threshold1/vm.swap_idle_threshold2, vm.overcommit, etc.
may be the harmful ones.
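
For reference, that workaround goes in /etc/sysctl.conf on FreeBSD
(shown only as the tuning described above, not a recommendation):

  vm.defer_swapspace_pageouts=1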


Simon

On 14/12/1 23:16, Doug Sampson wrote:

Maybe your problem is related to sysctl MIB tuning around
swap/overcommit etc. I did not observe a memory leak with squid 3.4.4,
but FreeBSD 10 does swap more frequently than older versions.


Could you elaborate a bit more? That went over my head. What could I do in 
terms of tuning the system?

~Doug




Re: [squid-users] squid 3.5x: Active Directory accounts with space issue

2014-12-01 Thread David Touzeau


On 30/11/2014 09:08, Amos Jeffries wrote:


On 30/11/2014 12:52 a.m., David Touzeau wrote:

On 26/11/2014 11:27, Amos Jeffries wrote:
On 24/11/2014 12:01 a.m., David Touzeau wrote:

Hi

We have connected 3.5.0.2-20141121-r13666 with Active
Directory. It seems that where there are spaces in the login
account, squid uses only the last argument.

For example, for an account "Jhon smith" squid uses only "smith".
For an account "Dr Jhon smith" squid uses only "smith".

In 3.3.13 there is no such issue: a "Jhon smith" account is
logged as "Jhon smith" and sent as Jhon%20smith to helpers

Any information about the auth scheme being performed? The helpers
being used? And what is being sent to/from the helpers in 3.5 that
differs from the 3.3 version?

Amos



Hi

I'm using this method

auth_param ntlm program /usr/bin/ntlm_auth --domain=TOUZEAU.BIZ --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 25 startup=5 idle=1
auth_param ntlm keep_alive off

# Dynamic ACLs groups Enabled: [1]
external_acl_type ads_group ttl=3600 children-max=5 children-startup=1 children-idle=1 %LOGIN /usr/share/artica-postfix/external_acl_squid_ldap.php

# Other settings
authenticate_ttl 1 hour
authenticate_cache_garbage_interval 10 seconds
authenticate_ip_ttl 60 seconds
# END NTLM Parameters

# Basic authentication for browsers that do not support NTLM: (KerbAuthMethod =  )
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 3 startup=1 idle=1
auth_param basic realm Basic Identification
auth_param basic credentialsttl 2 hours


On 3.3.13, everything works as expected. On 3.5.x, LOGIN is
truncated where there is a space in the account.

By "LOGIN" are you meaning the log entries for user name labels?
  the %LOGIN format code delivered to the external ACL helper?
  the user=X labels delivered by the NTLM helper to Squid?
  or the generic "login" concept?

The 'old' helper protocol was a whitespace-delimited set of fields
with a fixed meaning for each column/field. If the helper delivers an
un-encoded SP character inside an old-style response to Squid, it will
be parsed as two values.
  The 3.4+ helper handling parses that protocol and upgrades it to the
new kv-pair protocol automatically. Garbage fields are discarded from
the input.

It looks like the 2-column AF (NTLM) response is being confused for a
3-column AF (Kerberos) response, since the only difference between the
two helpers' outputs is the presence of a "token" column before the
username field.

You can work around it with a script that converts the protocol
explicitly before delivering responses to Squid; see the sketch below.
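
As a rough, untested sketch (assumption: the 3.4+ kv-pair upgrade
parser accepts a quoted user= field on the AF line; verify against the
helper protocol documentation for your Squid version):

  #!/usr/bin/perl
  # Hypothetical shim between Squid and ntlm_auth: pass requests
  # through, but re-emit AF responses with the username quoted so an
  # embedded space survives kv-pair parsing.
  use strict;
  use warnings;
  use IPC::Open2;

  $| = 1;                               # helpers must not buffer
  my ($from_helper, $to_helper);
  open2($from_helper, $to_helper,
        '/usr/bin/ntlm_auth', '--domain=TOUZEAU.BIZ',
        '--helper-protocol=squid-2.5-ntlmssp');
  { my $old = select($to_helper); $| = 1; select($old); }

  while (my $req = <STDIN>) {
      print $to_helper $req;            # forward Squid's request as-is
      my $resp = <$from_helper>;
      last unless defined $resp;
      chomp $resp;
      if ($resp =~ /^AF (.+)$/) {
          print qq(AF user="$1"\n);     # assumed quoted kv-pair form
      } else {
          print "$resp\n";              # TT / NA / BH pass unchanged
      }
  }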

Amos

Thanks Amos.

I agree, but the helper answers just OK if the user is a member of a
group; it doesn't send user=something.

After removing the helper, Squid still writes the truncated login.
So I'm talking about the generic login concept.


[squid-users] Forward proxy with BASIC authentication

2014-12-01 Thread fzab_
Hi,
I want to use Squid locally on my computer to forward all traffic to a
parent Squid proxy which uses BASIC authentication. The aim is to not store
my password in every configuration file that needs internet access.

So here are the only lines I added to Squid's default conf file:

cache_peer x.x.x.x parent 3128 0 no-query default login=login:password no-digest
never_direct allow all

It seems to work for a few minutes, but it doesn't seem to authenticate
again when needed. The access log shows 407 errors when it breaks:

297 127.0.0.1 TCP_MISS/407 2071 GET http://.

Am I missing something? When I take a look at the sent requests, none
have an authentication header.





[squid-users] sslcrtvalidator_program

2014-12-01 Thread WorkingMan
I am using the sample validator script called cert_valid.pl. Everything
is working as is (I can see stuff in the log in debug mode) but I could
not change the behavior when there is an error.

For example, when I receive an error
(X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE) I want to return OK
instead of ERR, but Squid still shows the error page in the browser.

I made both cases return OK just to see if I can change Squid's
behavior from the validator, so we can customize the result.

  $response = "";
  my $len = length($response);
  if ($haserror) {
      # normally ERR here; forced to OK for this test
      $response = $channelId . " OK " . $len . " " . $response . "\1";
  } else {
      $response = $channelId . " OK " . $len . " " . $response . "\1";
  }
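
For context, the validator is wired into squid.conf with the
sslcrtvalidator_program directive; a minimal sketch (the path and the
cache/ttl values are assumptions, not from the original post):

  sslcrtvalidator_program cache=2048 ttl=60 /usr/local/squid/libexec/cert_valid.pl
  sslcrtvalidator_children 5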

Let me know if there is a way to modify Squid's behavior from the
validator program.

Thanks
