Re: [squid-users] Recommended squid settings when using IPS-based domain blocking

2024-03-06 Thread brendan kearney
tell the team that is running the IPS to change their policy from DROP
to something else, so you are not a captive audience to the timeout.
By sending a RST, they can cause Squid to close the connection and
fail faster.  if they are intercepting the DNS request, have them
leverage an RPZ and send an NXDOMAIN response.  there are probably
other options to consider, too, and a conversation about how to handle
these scenarios should have happened before they moved to a Prevent
posture.
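
a minimal BIND RPZ sketch of that NXDOMAIN approach (zone and domain
names here are illustrative, not from any real deployment):

# named.conf sketch - response-policy goes inside the existing options {} block:
response-policy { zone "rpz.example"; };
# and at top level:
zone "rpz.example" { type master; file "rpz.example.db"; };

; rpz.example.db -- a CNAME to the root returns NXDOMAIN for listed names
$TTL 60
@  SOA localhost. admin.localhost. 1 3600 600 86400 60
   NS  localhost.
bad-c2-domain.example    CNAME .
*.bad-c2-domain.example  CNAME .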

in short, they made decisions in a vacuum and didn't include all
impacted teams (upstream or downstream) that their actions affected.  this,
as a policy problem, should be addressed with leadership.

HTH
brendan

On Wed, Mar 6, 2024 at 9:58 AM Alex Rousskov
 wrote:
>
> On 2024-03-06 09:48, Jason Marshall wrote:
>
> > We have been using squid (version squid-5.5-6.el9_3.5) under RHEL9 as a
> > simple pass-through proxy without issue for the past month or so.
> > Recently our security team implemented an IPS product that intercepts
> > domain names known to be associated with malware and ransomware command
> > and control. Once this was in place, we started having issues with the
> > behavior of squid.
> >
> > Through some troubleshooting, it appears that what is happening is
> > that when a user's machine makes a request through squid for one of
> > these bad domains, the request is dropped by the IPS, squid waits for
> > the DNS timeout, and then all requests made to squid after that result
> > in NONE_NONE/500 errors, and it never seems to recover until we do a
> > restart or reload of the service.
>
>
> DNS errors, including DNS query timeouts, are common, and Squid is
> supposed to handle them well. Assuming the DNS server is operational,
> what you describe sounds like a Squid bug. Lots of bugs were fixed since
> Squid v5.5, but I do not recall any single bug that would have such a
> drastic outcome.
>
> Squid v5 is not supported by the Squid Project. I recommend upgrading to
> the latest Squid v6 and retesting.
>
>
> HTH,
>
> Alex.
>
>
> > Initially the dns_timeout was set for 30 seconds. I reduced this,
> > thinking that perhaps requests were building up or something along those
> > lines. I set it to 5 seconds, but that just got us to a failure state
> > faster.
> >
> > I also found the negative_dns_ttl setting and thought it might be having
> > an effect, but setting this to 0 seconds resulted in no change to the
> > behavior.
> >
> > Are there any configuration tips that anyone can provide that might work
> > better with dropped/intercepted DNS requests? My current configuration
> > is included here:
> >
> > acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
> > acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
> > acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
> > acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
> > acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
> > acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)
> >
> > acl localnet src fc00::/7               # RFC 4193 local private network range
> > acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines
> >
> > acl SSL_ports port 443
> > acl Safe_ports port 80  # http
> > acl Safe_ports port 443 # https
> > acl Safe_ports port 9191    # papercut
> > http_access deny !Safe_ports
> > http_access allow localhost manager
> > http_access deny manager
> >
> > http_access allow localnet
> > http_access allow localhost
> > http_access deny all
> > http_port 0.0.0.0:3128 
> > http_port 0.0.0.0:3129 
> > cache deny all
> > coredump_dir /var/spool/squid
> > refresh_pattern ^ftp:           1440    20%     10080
> > refresh_pattern ^gopher:        1440    0%      1440
> > refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
> > refresh_pattern .               0       20%     4320
> > debug_options rotate=1 ALL,2
> > negative_dns_ttl 0 seconds
> > dns_timeout 5 seconds
> >
> > Thank you for any help that you can provide.
> >
> > Jason Marshall
> >


[squid-users] FATAL: assertion failed: peer_digest.cc:399: "fetch->pd && receivedData.data"

2023-12-06 Thread Brendan Kearney

list members,

i am running squid 6.5 on fedora 38, and have found this issue when 
running "cache sharing" (or cache_peer siblings) between my 3 squid 
instances.  a couple weeks ago, this was happening and an update seems 
to have fixed the majority of issues.  when i ran into the issue, i 
could disable cache_peer siblings and restart the instances that 
failed.  the recent update seemed to have addressed the problem, but i 
turned on ssl bump for a subset of traffic and the issue returned.
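
for reference, a minimal sketch of the sibling arrangement described
above (hostnames and ports are assumptions):

# on each instance, point at the other two:
cache_peer proxy2.example.com sibling 3128 3130 proxy-only
cache_peer proxy3.example.com sibling 3128 3130 proxy-only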


when i have all 3 proxies started, all of them will work for a period of 
time, then slowly all but one of the proxies will die. the last standing 
proxy does not go down because all other cache_peer siblings are 
offline, and the logic causing the failure does not execute.


this was a larger issue before the recent patch/update, but is still an 
issue when performing ssl bumping on traffic.  is there something i can 
provide in the way of logs or diagnostics to help identify the issue?


thanks,

brendan



Re: [squid-users] [DMARC] log_db_daemon errors

2023-11-03 Thread Brendan Kearney

On 11/3/23 8:27 AM, Amos Jeffries wrote:

On 3/11/23 08:14, jose.rodriguez wrote:

On 2023-11-02 13:46, Brendan Kearney wrote:

list members,

i am trying to log to a mariadb database, and cannot get the 
log_db_daemon script working.  i think i have everything setup, but 
an error is being thrown when i try to run the script manually.


/usr/lib64/squid/log_db_daemon 
/database:3306/squid/access_log/brendan/pass


Connecting... dsn='DBI:mysql:database=squid:database:3306', 
username='brendan', password='...' at /usr/lib64/squid/log_db_daemon 
line 399.



(Replied without looking and it did not go to the list, but to the 
personal email, so will repeat it for completeness...)



That DSN seems wrong, as far as I can find it should look like this:

DBI:mysql:database=$database;host=$hostname;port=$port

Something is not being 'fed' right to the script?



Thank you for the catch. I have now opened this to fix it:
<https://github.com/squid-cache/squid/pull/1570>


Cheers
Amos


in reading the description in the man page/perl script, i find that the 
only supported log format is the native squid format.  i have a custom 
log format that i use to log via syslog, and wonder what limitations 
exist in trying to expand the capability of the log_db_daemon.  i have 
the custom log format, and corresponding table structure for it.  is the 
effort involved more than just adding columns to the table, then 
updating the @db_fields and insert routine?
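
a sketch of the shape such a change might take, assuming the table
already has columns matching the custom logformat (field names and
connection details below are illustrative, not the script's actual
list):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# columns must line up with both the custom logformat field order
# and the table definition (these names are assumptions)
my @db_fields = qw(
    time_since_epoch client_src_ip_addr http_method url
    http_status_code bytes mime_type
);

my $dbh = DBI->connect("DBI:MariaDB:database=squid;host=database",
                       "brendan", "pass",
                       { AutoCommit => 1, RaiseError => 1 });

my $sql = "INSERT INTO access_log (" . join(',', @db_fields) . ")"
        . " VALUES (" . join(',', ('?') x @db_fields) . ")";
my $sth = $dbh->prepare($sql);

# one record per line on stdin, space-separated like the native format
while (my $line = <STDIN>) {
    chomp $line;
    my @values = split ' ', $line, scalar @db_fields;
    $sth->execute(@values);
}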


thanks,

brendan



Re: [squid-users] log_db_daemon errors

2023-11-02 Thread Brendan Kearney

On 11/2/23 2:51 PM, Brendan Kearney wrote:

On 11/2/23 2:49 PM, Francesco Chemolli wrote:

Hi Robert,
  are you sure that you have the required packages on your system?
You'll need perl-DBD-MariaDB and what it depends on



On Thu, Nov 2, 2023 at 6:41 PM Brendan Kearney  wrote:

On 11/2/23 2:14 PM, Robert 'Bobby' Zenz wrote:
>>> Use of uninitialized value $DBI::errstr in concatenation (.) or
>>> string at /usr/lib64/squid/log_db_daemon line 403.
>> You're trying to use an uninitialized variable when
outputting(?) the
>> error message. Fix that first. I'm guessing you're using the
`errstr`
>> function wrong there, see the official documentation for hints:
>> https://metacpan.org/pod/DBD::MariaDB
>>
>>> Cannot connect to database:  at
/usr/lib64/squid/log_db_daemon line
>>> 403.
>> And then you should see what error you're actually getting
here. My
>> guess is that it will be a permission issue. User not allowed to
>> connect from this host, or process not allowed to access the
socket or
>> something similar.
> My apologies, I missed that that might not be a script you've
written.
> I guess it is a ready-made script?

yes, this is the script packaged with squid from the fedora
repos.  i
will try to correct the script, which i believe may be victim to
newer
syntax in an updated perl version or something like that. we'll see
what comes of it...

thanks,

brendan




--
    Francesco


got that...

[root@server3 bin]# rpm -qa |grep perl |grep -i maria

perl-DBD-MariaDB-1.22-4.fc38.x86_64


original script:

# perform db connection
my $dsn = "DBI:mysql:database=$database" . ($host ne "localhost" ? ":$host" : "");

my $dbh;
my $sth;
eval {
    warn "Connecting... dsn='$dsn', username='$user', password='...'";
    $dbh = DBI->connect($dsn, $user, $pass,
        { AutoCommit => 1, RaiseError => 1, PrintError => 1 });
};
if ($EVAL_ERROR) {
    die "Cannot connect to database: $DBI::errstr";
}

hacked up, but seemingly working, mods:

# perform db connection
# my $dsn = "DBI:mysql:database=$database" . ($host ne "localhost" ? ":$host" : "");
my $dsn = "DBI:MariaDB:database=$database;host=$host";

my $dbh;
my $sth;
eval {
    # warn "Connecting... dsn='$dsn', username='$user', password='...'";
    $dbh = DBI->connect($dsn, $user, $pass,
        { AutoCommit => 1, RaiseError => 1, PrintError => 1 });
};
if ($EVAL_ERROR) {
    # die "Cannot connect to database: $DBI::errstr";
    die;
}

i am by far not a developer, so i cannot say what should be in the 
script.  brute forcing it got me to the mods shown above.
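
one further tweak worth considering, as a sketch: $EVAL_ERROR (which the
script already references) holds the text of the failed connect, while
$DBI::errstr was uninitialized because the DBD::mysql driver never
loaded, so the error detail can be kept instead of dropped:

if ($EVAL_ERROR) {
    die "Cannot connect to database: $EVAL_ERROR";
}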


Re: [squid-users] log_db_daemon errors

2023-11-02 Thread Brendan Kearney

On 11/2/23 2:49 PM, Francesco Chemolli wrote:

Hi Robert,
  are you sure that you have the required packages on your system?
You'll need perl-DBD-MariaDB and what it depends on



On Thu, Nov 2, 2023 at 6:41 PM Brendan Kearney  wrote:

On 11/2/23 2:14 PM, Robert 'Bobby' Zenz wrote:
>>> Use of uninitialized value $DBI::errstr in concatenation (.) or
>>> string at /usr/lib64/squid/log_db_daemon line 403.
>> You're trying to use an uninitialized variable when
outputting(?) the
>> error message. Fix that first. I'm guessing you're using the
`errstr`
>> function wrong there, see the official documentation for hints:
>> https://metacpan.org/pod/DBD::MariaDB
>>
>>> Cannot connect to database:  at /usr/lib64/squid/log_db_daemon
line
>>> 403.
>> And then you should see what error you're actually getting here. My
>> guess is that it will be a permission issue. User not allowed to
>> connect from this host, or process not allowed to access the
socket or
>> something similar.
> My apologies, I missed that that might not be a script you've
written.
> I guess it is a ready-made script?

yes, this is the script packaged with squid from the fedora repos.  i
will try to correct the script, which i believe may be victim to
newer
syntax in an updated perl version or something like that. we'll see
what comes of it...

thanks,

brendan




--
    Francesco


got that...

[root@server3 bin]# rpm -qa |grep perl |grep -i maria

perl-DBD-MariaDB-1.22-4.fc38.x86_64


Re: [squid-users] log_db_daemon errors

2023-11-02 Thread Brendan Kearney

On 11/2/23 2:14 PM, Robert 'Bobby' Zenz wrote:

Use of uninitialized value $DBI::errstr in concatenation (.) or
string at /usr/lib64/squid/log_db_daemon line 403.

You're trying to use an uninitialized variable when outputting(?) the
error message. Fix that first. I'm guessing you're using the `errstr`
function wrong there, see the official documentation for hints:
https://metacpan.org/pod/DBD::MariaDB


Cannot connect to database:  at /usr/lib64/squid/log_db_daemon line
403.

And then you should see what error you're actually getting here. My
guess is that it will be a permission issue. User not allowed to
connect from this host, or process not allowed to access the socket or
something similar.

My apologies, I missed that that might not be a script you've written.
I guess it is a ready-made script?


yes, this is the script packaged with squid from the fedora repos.  i 
will try to correct the script, which i believe may be victim to newer 
syntax in an updated perl version or something like that.  we'll see 
what comes of it...


thanks,

brendan



[squid-users] log_db_daemon errors

2023-11-02 Thread Brendan Kearney

list members,

i am trying to log to a mariadb database, and cannot get the 
log_db_daemon script working.  i think i have everything setup, but an 
error is being thrown when i try to run the script manually.


/usr/lib64/squid/log_db_daemon /database:3306/squid/access_log/brendan/pass

Connecting... dsn='DBI:mysql:database=squid:database:3306', 
username='brendan', password='...' at /usr/lib64/squid/log_db_daemon 
line 399.
Use of uninitialized value $DBI::errstr in concatenation (.) or string 
at /usr/lib64/squid/log_db_daemon line 403.

Cannot connect to database:  at /usr/lib64/squid/log_db_daemon line 403.

i have no idea what is going wrong, but i cannot get any more detail 
about what is missing or malformed.  any ideas?


thanks,

brendan



[squid-users] sharing generated certs between squid instances

2023-08-26 Thread Brendan Kearney

list members,

i have a couple squid instances that are performing bump/peek/splice and
generating dynamic certs.  i want to share the certs that are generated
by the individual instances across the rest of them, via NFS or some
shared mechanism.  so, if squid1 creates a cert, i want squid2 through
squidN to be able to leverage that cert and not have to create the cert
again.


having tried to put the certs on an NFS share, i am seeing that all of
the instances run into file locking issues when updating the database
file "index.txt".


is there any way to share the certs between instances to save processing 
power/time?
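
for what it's worth, the generated-cert database is normally private to
each instance, which sidesteps the index.txt contention; a sketch of the
usual per-instance setup (paths and sizes are assumptions):

sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 16MB
sslcrtd_children 5

as long as every instance signs with the same CA, clients accept either
copy of a generated cert, so the cost of not sharing is only the
duplicate signing work.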


thanks in advance,

brendan



Re: [squid-users] cachemgr.cgi & Internal Error: Missing Template MGR_INDEX

2023-07-30 Thread Brendan Kearney
Jul 28 13:09:50 server2 (squid-1)[227457]: 
192.168.88.1,server1.bpk2.com,-,28/Jul/2023:13:09:50 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","cache_object://server2.bpk2.com/menu","cachemgr.cgi/4.14",200,3817,-,"TCP_MISS/HIER_NONE","text/plain"
Jul 28 13:09:30 server2 (squid-1)[227457]: 
192.168.88.1,server1.bpk2.com,-,28/Jul/2023:13:09:30 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","cache_object://server2.bpk2.com/config","cachemgr.cgi/4.14",200,18284,-,"TCP_MISS/HIER_NONE","text/plain"
Jul 28 13:09:26 server2 (squid-1)[227457]: 
192.168.88.1,server1.bpk2.com,-,28/Jul/2023:13:09:26 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","cache_object://server2.bpk2.com/","cachemgr.cgi/4.14",200,3817,-,"TCP_MISS/HIER_NONE","text/plain"
Jul 28 13:08:44 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:13:08:44 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";
Jul 28 13:04:46 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:13:04:46 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";
Jul 28 13:03:30 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:13:03:30 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";
Jul 28 13:01:02 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:13:01:02 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";
Jul 28 12:59:15 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:12:59:15 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";
Jul 28 12:59:11 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:12:59:11 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";
Jul 28 12:58:27 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:12:58:27 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";
Jul 28 12:58:14 server2 (squid-1)[227457]: 
192.168.88.2,server2.bpk2.com,-,28/Jul/2023:12:58:14 
-0400,192.168.88.2,3128,-,"squid",GET,"HTTP/1.0","http://proxy2.bpk2.com:3128/squid-internal-mgr/","cachemgr.cgi/6.1",404,372,-,"TCP_MISS/HIER_NONE","text/html";


On 7/29/23 4:01 PM, Alex Rousskov wrote:

On 7/29/23 12:31, Brendan Kearney wrote:

i am not following.


Sorry, I was just gathering evidence and explaining what you saw. I 
have not confirmed a bug and have not been offering a solution (yet?).



squid 4.14 on fedora 32 does not have the file, nor does it exhibit 
the issue.


squid 6.1 on fedora 38 does not have the file, but does exhibit the 
issue.


... and you do not have a MGR_INDEX file and, presumably, did not have 
that file before. That piece of information was missing. Now we have 
it. Let's call that progress, even though it may not look like one :-).


One additional checkbox to tick is to make sure that the cachemgr.cgi 
script you are using comes from the Squid version you are testing.



what am i missing, and is there a way to provide this functionality 
in 6.1?  if an external tool, or different package, is needed what is 
that?


cachemgr.cgi is not my area of expertise, but I believe that, bugs 
notwithstanding, the functionality you want should be available 
without an external tool.


The next step, AFAICT, is for you to detail:

* what HTTP response cachemgr.cgi script gets from Squid and
* what HTTP response your browser gets from cachemgr.cgi

According to [2], Squid should send a 404 response to cachemgr.cgi. 
You may be able to find some Squid response details (e.g., status 
code) in Squid access.log. If your cachemgr.cgi is sending plain text 
requests to Squid, you can also capture HTTP traffic to and from 
cachemgr.cgi using tcpdump, wireshark, or similar tools.
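
a capture along those lines might look like this (interface and port
are assumptions; 3128 matches the logs above):

tcpdump -i any -s 0 -w cachemgr.pcap 'tcp port 3128'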

Re: [squid-users] cachemgr.cgi & Internal Error: Missing Template MGR_INDEX

2023-07-29 Thread Brendan Kearney

i am not following.

squid 4.14 on fedora 32 does not have the file, nor does it exhibit the 
issue.

squid 6.1 on fedora 38 does not have the file, but does exhibit the issue.

what am i missing, and is there a way to provide this functionality in 
6.1?  if an external tool, or different package, is needed what is that?


thanks,

brendan

On 7/29/23 12:22 PM, Alex Rousskov wrote:

On 7/29/23 11:07, Brendan Kearney wrote:

the package installed does not have any file named MGR_INDEX. running 
"rpm -ql squid |grep -i index" does not return anything. searching in 
/usr/share/squid for the file does not find it, either.  funny that 
neither the old version of squid, nor the new version of squid have 
that file at all.


Yes, the lack of MGR_INDEX file in Squid sources is "by design" of 
that MGR_INDEX feature -- an "external tool" is supposed to provide 
that file in certain cases[1]. Please do not misinterpret my statement 
as defense of the corresponding design decisions or their side 
effects; I am just stating the facts rather than trying to justify bad 
user experience.


[1] https://github.com/squid-cache/squid/pull/1176#discussion_r1010534845


Alex.



@amos,

i ran firefox with developer tools open, and browsed to the cachemgr 
URL, and reproduced the issue.  the traffic is not being proxied 
through squid, and is making the requests directly.  i am not sure if 
that is what you mean.  i saved the session as a HAR file, if that 
helps.


thank you,

brendan

On 7/29/23 1:26 AM, Amos Jeffries wrote:

On 29/07/23 14:42, Alex Rousskov wrote:

On 7/28/23 20:08, Brendan Kearney wrote:

i am running squid 6.1 on fedora 38, and cannot get the 
cachemgr.cgi working on this box.  I am getting the error:


Internal Error: Missing Template MGR_INDEX

when i try to connect using the cache manager interface.




That is the expected output when you are trying to access the 
manager interface directly from Squid. **Instead** of via the 
cachemgr.cgi.


If you want to try the new manager interface I have a prototype 
javascript tool available at <https://github.com/yadij/cachemgr.js/>.



Amos


Re: [squid-users] cachemgr.cgi & Internal Error: Missing Template MGR_INDEX

2023-07-29 Thread Brendan Kearney

@alex,

the package installed does not have any file named MGR_INDEX. running 
"rpm -ql squid |grep -i index" does not return anything. searching in 
/usr/share/squid for the file does not find it, either.  funny that 
neither the old version of squid, nor the new version of squid have that 
file at all.


@amos,

i ran firefox with developer tools open, and browsed to the cachemgr 
URL, and reproduced the issue.  the traffic is not being proxied through 
squid, and is making the requests directly.  i am not sure if that is 
what you mean.  i saved the session as a HAR file, if that helps.


thank you,

brendan

On 7/29/23 1:26 AM, Amos Jeffries wrote:

On 29/07/23 14:42, Alex Rousskov wrote:

On 7/28/23 20:08, Brendan Kearney wrote:

i am running squid 6.1 on fedora 38, and cannot get the cachemgr.cgi 
working on this box.  I am getting the error:


Internal Error: Missing Template MGR_INDEX

when i try to connect using the cache manager interface.




That is the expected output when you are trying to access the manager 
interface directly from Squid. **Instead** of via the cachemgr.cgi.


If you want to try the new manager interface I have a prototype 
javascript tool available at <https://github.com/yadij/cachemgr.js/>.



Amos


[squid-users] cachemgr.cgi & Internal Error: Missing Template MGR_INDEX

2023-07-28 Thread Brendan Kearney

list members,

i am running squid 6.1 on fedora 38, and cannot get the cachemgr.cgi 
working on this box.  I am getting the error:


Internal Error: Missing Template MGR_INDEX

when i try to connect using the cache manager interface.  oddly, when i 
connect from a different host running squid, using the older squid 4.14 
on fedora 32 cachemgr.cgi, i am able to get into the cache manager 
interface.  is there something i am missing?


thanks,

brendan



Re: [squid-users] Block doc documents

2017-06-27 Thread brendan kearney
You need an ICAP server intelligent enough to differentiate between the
file types.  Squid is a proxy and can only deal with the protocol.  An ICAP
server can deal with the content.  c-icap and eCAP are a couple of options
that seem to be available.  I have no experience with either.
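
that said, for the narrower Content-Disposition question quoted below,
squid can pattern-match response headers with a rep_header ACL; a hedged
sketch (the regex is illustrative and worth testing against real
traffic):

# match .doc/.docm but not .docx in the Content-Disposition filename
acl blockDocName rep_header Content-Disposition -i \.docm?($|[^x])
http_reply_access deny blockDocName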

On Jun 27, 2017 7:53 AM, "Daniel Rieken"  wrote:

> Hello,
>
> I would like to block my users from downloading doc- and docm-files,
> but not docx.
>
> So this works fine for me:
> /etc/squid3/blockExtensions.acl:
> \.doc(\?.*)?$
> \.docm(\?.*)?$
>
> acl blockExtensions urlpath_regex -i "/etc/squid3/blockExtensions.acl"
> http_access deny blockExtensions
>
>
> But in some cases the URL doesn't contain the extension (e.g. doc).
> For URLs like this the above ACL doesn't work:
> - http://www.example.org/download.pl?file=wordfile
> - http://www.example.org/invoice-5479657415/
>
> Here I need to work with mime-types:
> acl blockMime rep_mime_type application/msword
> acl blockMime rep_mime_type application/vnd.ms-word.
> document.macroEnabled.12
> http_reply_access deny blockMime
>
> This works fine, too. But I see a problem: The mime-type is defined on
> the webserver. So the badguy could configure his webserver to serve a
> doc-file as application/i.am.not.a.docfile and the above ACL isn't
> working anymore.
> Is there any way to make squid block doc- and docm files based on the
> response-headers file-type?
> Or in other words: Is squid able to match the "doc" in the
> Content-Disposition header of the response?
>
> HTTP/1.0 200 OK
> Date: Tue, 27 Jun 2017 11:40:57 GMT
> Server: Apache Phusion_Passenger/4.0.10 mod_bwlimited/1.4
> Cache-Control: no-cache, no-store, max-age=0, must-revalidate
> Pragma: no-cache
> Content-Type: application/baddoc
> Content-Disposition: attachment;
> filename="gescanntes-Dokument-VPPAW-072-JCD3032.doc"
> Content-Transfer-Encoding: binary
> X-Powered-By: PHP/5.3.29
> Connection: close
>
>
> Regards, Daniel


Re: [squid-users] microsoft edge and proxy auth not working

2017-03-09 Thread Brendan Kearney
adding this back to the mailing list, for the benefit of those who 
search for it.


i do not have simple and easy to use instructions for mac os x and linux 
participation in AD.  it is not a simple task.  on linux, you will need 
to look into SSSD (the System Security Services Daemon) and understand 
that process.


i have a mac for work, and it is a domain member object, so i know it 
can be done.  i don't know how it is done, and would think there are 
internet articles that you can search for on the subject.
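
as a rough starting point on the linux side, assuming a realmd/SSSD
based distro (package names per fedora/rhel; the domain name is an
assumption):

dnf install realmd sssd adcli oddjob-mkhomedir
realm join --user=Administrator EXAMPLE.COM
realm list    # verify the membership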


On 03/09/2017 02:13 PM, Rafael Akchurin wrote:

Hello Brendan,

Yes by default we have NTLM disabled :)

Unfortunately we must keep the proxy solution in parity with DC 
capabilities in AD, which luckily still supports NTLM authentication 
through LDAP.

This allows us to relay the tokens without Samba, as I described in the 
previous mail.

BTW if you could share ready-to-use (simple) instructions to have 
Kerberos auth supported from Mac/iPhone/iPad and Linux (Ubuntu/CentOS) 
it would be beneficial to all.

Best regards,
Rafael Akchurin


Op 9 mrt. 2017 om 19:47 heeft Brendan Kearney  het volgende 
geschreven:


On 03/09/2017 01:17 PM, Rafael Akchurin wrote:
The thing is, when you have some machines in your network which are not 
joined to the domain (think apple, linux) you still need NTLM support on 
the proxy :(

And having full-blown Samba just because of those few is too much of an 
admin hassle - so we had to write an NTLM relay that rebinds to the 
domain controller over the LDAP protocol, passing the NTLM token back 
and forth.

Joining the Squid proxy to the domain (which is required to authenticate 
using Samba/NTLM) also prevents successful reverts from vm snapshots 
after 30 days and requires a rejoin - thus preventing us from creating 
easily provisioned/thrown-away scalable web filter / proxy instances 
(think docker).

Best regards,
Rafael Akchurin


Op 9 mrt. 2017 om 19:09 heeft Mike Surcouf  het volgende 
geschreven:

Ah OK sorry
I am curious why you have a reason to use NTLM over Kerberos? :-)

-Original Message-
From: Rafael Akchurin [mailto:rafael.akchu...@diladele.com]
Sent: 09 March 2017 18:01
To: Mike Surcouf
Cc: Amos Jeffries; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] microsoft edge and proxy auth not working

Hello Mike,

I specifically was debugging our NTLM implementation with Edge :)

Kerberos works just fine, you are correct.

Best regards,
Rafael Akchurin


Op 9 mrt. 2017 om 18:57 heeft Mike Surcouf  het volgende 
geschreven:

Hi Rafael

Is there any reason you can't use Kerberos.
Note you will need to create a keytab but the setup is not that hard and in the 
docs.
I use it very successfully on window AD network.

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
auth_param negotiate children 20
auth_param negotiate keep_alive on
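
one way to create that keytab, as a sketch with msktutil (principal and
host names here are assumptions):

msktutil -c -s HTTP/proxy.example.com -k /etc/squid/squid.keytab \
    --computer-name SQUIDPROXY --upn HTTP/proxy.example.com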

Thanks

Mike

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
On Behalf Of Rafael Akchurin
Sent: 09 March 2017 17:01
To: Amos Jeffries; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] microsoft edge and proxy auth not working

Hello Amos, Markus, all,

Just as a side note - I also suffered from this error some time ago with 
Edge and our custom NTLM relay to domain controllers (run as an auth helper 
by Squid). The strange thing is that it went away after installing some 
(unknown) Windows update.

I do have the "auth_param ntlm keep_alive off" in the config though.

It all makes me quite suspicious the error was/is in Edge or in my curly hands.

Best regards,
Rafael Akchurin
Diladele B.V.

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
On Behalf Of Amos Jeffries
Sent: Thursday, March 9, 2017 5:12 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] microsoft edge and proxy auth not working

On 8/03/2017 11:28 p.m., Rietzler, Markus (RZF, Aufg 324 /
) wrote:

i should add that we are using squid 3.5.24.


Try with "auth_param ntlm keep_alive off". Recently the browsers have been 
needing that.

Though frankly I am surprised if Edge supports NTLM at all. It was deprecated 
in April 2006 and MS announced removal was being actively pushed in all their 
software since Win7.


-Ursprüngliche Nachricht-
Von: Rietzler, Markus

we have some windows 10 clients using microsoft edge browser.
access to internet is only allowed for authenticated users. we are
using samba/winbind auth

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5- ntlmssp auth_param ntlm children 64
startup=24 idle=12 auth_param ntlm keep_alive on acl auth_user
proxy_auth REQUIRED

on windows 10 clients with IE11 it is working (with ntlm automatic
auth). on the same machine, with Microsoft Edge I get a TCP_DENIED/407 message.
seems I only get one single TCP_DENIED/407 line in access.log and an
auth dialog pops up. I have disabled basic auth via ntlm.
shouldn't there be 3 lines for proxy auth?

Re: [squid-users] Tunnelling requests using squid-cache

2017-02-09 Thread Brendan Kearney

On 02/08/2017 09:54 PM, Kottur, Abhijit wrote:


Hi Team,

I am writing this email to understand the capabilities of the product 
‘squid-cache’.


Requirement:

I have an executable(.exe) which is trying to hit an internet website. 
This executable has the capability to accept proxy IP and port.


However, our enterprise proxy needs authentication credentials (NTLM 
Authentication) to allow any network through it. The executable that I 
have doesn’t accept credentials.


Thus, I need to forward all requests from the executable to a /local 
proxy/ which should encapsulate the request with the actual proxy 
details (IP, port, username & password) and route the requests to the 
proxy so that it will be allowed to hit internet. Likewise for response.


Please let me know if this can be achieved by ‘squid-cache’ and if 
yes, please provide me with the details as to how this can be 
configured on Win XP/8.1.


Also, I couldn’t find a windows 32 bit installer. Please let me know 
if there is one available.


Thanks in advance J

*_*

Regards,

Abhijit Kottur

Level 15, 255 Pitt Street,

Sydney, NSW  2000

M: +61 414649364

mailto:abhijit.kot...@cba.com.au 


*_*
/Our vision is to be Australia's finest financial services 
organisation through excelling in customer service/








Look into CNTLM.  it is a local proxy that can inject NTLM headers, to 
satisfy proxy authentication.  you would then chain CNTLM to your 
corporate proxies for internet access.
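
a minimal cntlm.conf sketch of that chaining (every value below is an
assumption):

Username   aduser
Domain     CORP
# hash produced by 'cntlm -H' (this value is a placeholder);
# prefer this over a cleartext Password line
PassNTLMv2 D5826E9C665C37C80B53397D5C07BBCB
Proxy      corporate-proxy.example.com:8080
Listen     3128

the executable would then be pointed at localhost:3128 with no
credentials at all.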




Re: [squid-users] Squid Forward Proxy for LDAP

2016-12-15 Thread Brendan Kearney

On 12/15/2016 04:20 PM, Yuri Voinov wrote:




15.12.2016 20:29, Bryan Peters пишет:

My Google-fu seems to be coming up short.

We have an application that ties into our users SSO/LDAP servers.  
We, don't run an LDAP server of our own, we're just making outbound 
calls to their LDAP servers.


I would like to proxy all outbound LDAP calls through Squid to get 
around some limitations of AWS and our customers need to whitelist an 
IP. (AWS load balancers don't have static IPs, some of our customers 
won't whitelist FQDNs in their firewall).


Getting the traffic from our app server(s) to the Squid box hasn't 
been much of a problem.  I'm using Iptables/NAT to accomplish this.   
TCPdump on the Squid machine sees  traffic coming in on 3128.


I've added 389 as a 'safe port' in the squid config, created ACLs 
that allow the network the traffic is coming in on.  Yet squid never 
grabs the traffic and does anything with it.  The logs don't get 
updated at all.


Am I incorrect about Squid being able to proxy LDAP traffic?

Exactly. By definition, squid is an HTTP proxy - initially, at least.
Modern versions also support HTTPS (with restrictions) and FTP (with 
restrictions).


Googling for this is sort of maddening as all forums, mailing lists, 
FAQs and documentation continues to come up for doing LDAP auth on a 
Squid machine, which isn't what I'm looking for at all.

Condolences. The thing you want is not possible with Squid.


Any help you can give would be appreciated.
That cannot be helped; the product is simply not that kind of tool. 
Squid is not a proxy for every protocol in the world, although support 
for some of them would not hurt - and FTP is certainly not the one to 
pick (FTP - in the year 2016, indeed! :))


Thanks




--
Cats - delicious. You just do not know how to cook them.




if you want to proxy LDAP, why not use LDAP to do it?

http://www.openldap.org/doc/admin23/proxycache.html
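
a minimal slapd sketch of that approach, using the back-ldap proxy
backend (suffix and upstream URI are assumptions):

database  ldap
suffix    "dc=customer,dc=example"
uri       "ldap://sso.customer.example/"

this gives the outbound LDAP calls one fixed host whose IP the customer
can whitelist.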




Re: [squid-users] issues with amazonaws & cloudfront

2016-09-23 Thread Brendan Kearney

On 09/23/2016 10:28 AM, lravelo wrote:

Good morning!

I have four squid 3.3.8 proxies load balanced behind two VIPs (in groups of
two) using least connections load balancing.  I've been having issues with
the .amazonaws.com and .cloudfront.com domains.  We use TCP load balancing
and not HTTP load balancing.  Basically what happens is that these web pages
request a keep-alive and on the browser console I'm seeing messages saying
that proxy authentication failed and some "ERR_CACHE_ACCESS_DENIED 0" errors
as well.  We do have kerberos authentication for SSO.  Not sure if anyone
else has had this issue and what's been done to resolve it.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/issues-with-amazonaws-cloudfront-tp4679665.html
Sent from the Squid - Users mailing list archive at Nabble.com.
what is the DNS name of the VIP you load balance behind?  does the DNS 
name match the HTTP principal you created in kerberos?  for example:


dns name: proxy.domain.tld
kerberos principal: HTTP/proxy.domain.tld@REALM

the keytabs that you created have to be identical on each load 
balanced pool member.  you should have made one keytab, and securely 
copied it to each pool member.  if they are not exactly identical, one 
proxy will work (the one with the latest keytab created, because its 
KVNO will be ordinally greater [use "klist -Kket /path/to/file.keytab"]) 
and the others won't work.
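
in practice that means something like (host names are assumptions):

# create the keytab once, then copy it verbatim to the other members:
scp /etc/squid/squid.keytab proxy2:/etc/squid/squid.keytab
# on every member, compare KVNOs and enctypes:
klist -Kket /etc/squid/squid.keytab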



Re: [squid-users] AD Ldap (automatically take the user that is logging on PC)

2016-08-17 Thread brendan kearney
You want Kerberos and/or NTLM authentication for Single Sign On.  These
authentication methods automatically provide credentials when browsers are
configured and the necessary network services are running.
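
a minimal squid.conf sketch for that (the helper path is
distro-specific):

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
auth_param negotiate children 20
auth_param negotiate keep_alive on
acl auth_user proxy_auth REQUIRED
http_access allow auth_user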

On Aug 17, 2016 6:30 PM, "erdosain9"  wrote:

> lol
> no, for all the ACL.
> vip and control...
> that no users need to enter username and password ... (only to log on to
> the
> PC, but do not have to put username and password in the browser)..
> for all.
>
> (i dont speak english.)
>
>
>
> --
> View this message in context: http://squid-web-proxy-cache.
> 1019090.n4.nabble.com/AD-Ldap-automatically-take-the-user-
> that-is-logging-on-PC-tp4678994p4678996.html
> Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-04 Thread brendan kearney
At what point does buffer bloat set in?  I have a linux router with the
below sysctl tweaks load balancing with haproxy to 2 squid instances.  I
have 4 x 1Gb interfaces bonded and have bumped the ring buffers on RX and
TX to 1024 on all interfaces.

The squid servers run with almost the same hardware and tweaks, except the
ring buffers have only been bumped to 512.

DSL Reports has a speed test page that supposedly finds and quantifies
buffer bloat and my setup does not introduce it, per their tests.

I am only running a home internet connection (50 down x 15 up) but have a
wonderful browsing experience.  I imagine scale of bandwidth might be a
factor, but have no idea where buffer bloat begins to set in.

# Favor low latency over high bandwidth
net.ipv4.tcp_low_latency = 1

# Use the full range of ports.
net.ipv4.ip_local_port_range = 1025 65535

# Maximum number of open files per process; default 1048576
#fs.nr_open = 1000

# Increase system file descriptor limit; default 402289
fs.file-max = 10

# Maximum number of requests queued to a listen socket; default 128
net.core.somaxconn = 1024

# Maximum number of packets backlogged in the kernel; default 1000
#net.core.netdev_max_backlog = 2000
net.core.netdev_max_backlog = 4096

# Maximum number of outstanding syn requests allowed; default 128
#net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_syn_backlog = 16284

# Discourage Linux from swapping idle processes to disk (default = 60)
#vm.swappiness = 10

# Increase Linux autotuning TCP buffer limits
# Set max to 16MB for 1GE and 32M (33554432) or 54M (56623104) for 10GE
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Increase Linux autotuning UDP buffer limits
net.ipv4.udp_mem = 4096 87380 16777216

# Make room for more TIME_WAIT sockets due to more clients,
# and allow them to be reused if we run out of sockets
# Also increase the max packet backlog
net.core.netdev_max_backlog = 5
net.ipv4.tcp_max_syn_backlog = 3
net.ipv4.tcp_max_tw_buckets = 200
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0
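
these can be applied and spot-checked without a reboot; for example
(the file path is an assumption):

sysctl -p /etc/sysctl.d/99-tuning.conf
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.somaxconn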

On Aug 4, 2016 2:17 AM, "Amos Jeffries"  wrote:

> On 4/08/2016 2:32 a.m., Heiler Bemerguy wrote:
> >
> > I think it doesn't really matter how much squid sets its default buffer.
> > The linux kernel will upscale to the maximum set by the third option.
> > (and the TCP Window Size will follow that)
> >
> > net.ipv4.tcp_wmem = 1024 32768 8388608
> > net.ipv4.tcp_rmem = 1024 32768 8388608
> >
>
> Having large system buffers like that just leads to buffer bloat
> problems. Squid is still the bottleneck if it is sending only 4KB each
> I/O cycle to the client - no matter how much is already received by
> Squid, or stuck in kernel queues waiting to arrive to Squid. The more
> heavily loaded the proxy is the longer each I/O cycle gets as all
> clients get one slice of the cycle to do whatever processing they need
> done.
>
> The buffers limited by HTTP_REQBUF_SZ are not dynamic so its not just a
> minimum. Nathan found a 300% speed increase from a 3x buffer size
> increase. Which is barely noticable (but still present) on small
> responses, but very noticable with large transactions.
>
> Amos
>


Re: [squid-users] Problem site

2016-07-20 Thread Brendan Kearney

On 07/20/2016 08:24 PM, brendan kearney wrote:


Developer tools is not browser specific.  Both IE and Firefox have 
it.  Not sure about Chrome.


Yes telerik fiddler is what I meant.  There is a free version I use.  
I have not come across an open source equivalent.



On Jul 20, 2016 8:12 PM, "Antony Stone"  wrote:


On Thursday 21 July 2016 at 01:07:51, brendan kearney wrote:

> I would use developer tools (press f12 in your browser)

That sounds quite browser-specific - thanks for mentioning
previously that
you're using Firefox.

> or maybe run fiddler to dig into the details.

I assume you mean http://www.telerik.com/fiddler ?

Is there anything similar to this available under an Open Source
licence?
http://www.telerik.com/purchase.aspx seems to be a pretty
expensive option.

Thanks,


Antony.

--
"Black holes are where God divided by zero."

 - Steven Wright

   Please reply to
the list;
 please *don't* CC me.


see https://www.telerik.com/download/fiddler


Re: [squid-users] Problem site

2016-07-20 Thread brendan kearney
Developer tools is not browser specific.  Both IE and Firefox have it.  Not
sure about Chrome.

Yes telerik fiddler is what I meant.  There is a free version I use.  I
have not come across an open source equivalent.

On Jul 20, 2016 8:12 PM, "Antony Stone" 
wrote:

> On Thursday 21 July 2016 at 01:07:51, brendan kearney wrote:
>
> > I would use developer tools (press f12 in your browser)
>
> That sounds quite browser-specific - thanks for mentioning previously that
> you're using Firefox.
>
> > or maybe run fiddler to dig into the details.
>
> I assume you mean http://www.telerik.com/fiddler ?
>
> Is there anything similar to this available under an Open Source licence?
> http://www.telerik.com/purchase.aspx seems to be a pretty expensive
> option.
>
> Thanks,
>
>
> Antony.
>
> --
> "Black holes are where God divided by zero."
>
>  - Steven Wright
>
>Please reply to the
> list;
>  please *don't* CC
> me.


Re: [squid-users] Problem site

2016-07-20 Thread brendan kearney
I would use developer tools (press f12 in your browser) or maybe run
fiddler to dig into the details.

On Jul 20, 2016 6:59 PM, "brendan kearney"  wrote:

> Firefox on android :)
>
> On Jul 20, 2016 6:34 PM, "Antony Stone" 
> wrote:
>
>> On Thursday 21 July 2016 at 00:25:38, brendan kearney wrote:
>>
>> > An error occurred during a connection to e-vista.scsolutionsinc.com.
>> SSL
>> > received a weak ephemeral Diffie-Hellman key in Server Key Exchange
>> > handshake message. Error code: SSL_ERROR_WEAK_SERVER_EPHEMERAL_DH_KEY
>>
>> That looks helpful.
>>
>> How / where did you get that message?
>>
>>
>> Antony.
>>
>> --
>> In the Beginning there was nothing, which exploded.
>>
>>  - Terry Pratchett
>>
>>Please reply to the
>> list;
>>  please *don't*
>> CC me.


Re: [squid-users] Problem site

2016-07-20 Thread brendan kearney
Firefox on android :)

On Jul 20, 2016 6:34 PM, "Antony Stone" 
wrote:

> On Thursday 21 July 2016 at 00:25:38, brendan kearney wrote:
>
> > An error occurred during a connection to e-vista.scsolutionsinc.com. SSL
> > received a weak ephemeral Diffie-Hellman key in Server Key Exchange
> > handshake message. Error code: SSL_ERROR_WEAK_SERVER_EPHEMERAL_DH_KEY
>
> That looks helpful.
>
> How / where did you get that message?
>
>
> Antony.
>
> --
> In the Beginning there was nothing, which exploded.
>
>  - Terry Pratchett
>
>Please reply to the
> list;
>  please *don't* CC
> me.


Re: [squid-users] Problem site

2016-07-20 Thread brendan kearney
An error occurred during a connection to e-vista.scsolutionsinc.com. SSL
received a weak ephemeral Diffie-Hellman key in Server Key Exchange
handshake message. Error code: SSL_ERROR_WEAK_SERVER_EPHEMERAL_DH_KEY

On Jul 20, 2016 5:49 PM, "Antony Stone" 
wrote:

On Wednesday 20 July 2016 at 23:38:03, Joseph L. Casale wrote:

> Hi,
> Recently our users can no longer connect

Care to add any detail to "can no longer connect"?

eg:

1. They used to be able to - when did this change?

2. What error message or response do users now see in their browser?

3. What shows up in Squid's access.log when users now attempt to connect to
the URL?

4. What was in access.log when they could previously successfully connect to
this URL?

5. Has squid.conf changed since that date?

> to a vendor url
> https://e-vista.scsolutionsinc.com/evista/jsp/delfour/eVistaStart.jsp
> behind squid.

> We have a few sites that don't work well

Such as?

> when cached and adding this domain to that acl

How have you tried to add an HTTPS domain to an ACL?

> has not helped. We are using version 3.3.8.

Which Operating System (on the Squid box) and which version?

> Any suggestion as to what might help?

Certainly:

 - tell us what browser/s your users are using

 - tell us what Squid configuration you have (squid.conf without comments or
blank lines)

 - tell us what you get in access.log when you visit a problematic URL

 - tell us anything which has changed about your network or Squid setup
since
the users were last able to successfully connect


Regards,


Antony.

--
I conclude that there are two ways of constructing a software design: One
way
is to make it so simple that there are _obviously_ no deficiencies, and the
other way is to make it so complicated that there are no _obvious_
deficiencies.

 - C A R Hoare

   Please reply to the list;
 please *don't* CC
me.


Re: [squid-users] Force DNS queries over TCP?

2016-06-30 Thread brendan kearney
Nscd, the name service cache daemon, may be of help.  I believe you can run
your own bind instance and point it at the roots, instead of using your
isp's broken implementation.
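
a minimal named.conf sketch of such a resolver, recursing from the root
hints instead of forwarding to the ISP (addresses are assumptions):

options {
    directory "/var/named";
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.0.0/24; };
    # no 'forwarders' line: iterate from the root servers directly
};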
On Jun 30, 2016 2:21 PM, "Chris Horry"  wrote:

>
>
> On 06/30/2016 13:34, Alex Crow wrote:
> > I'd suggest changing IP as this practice is
> >
> > a) a violation of trust, forcing you to use a potentially compromised
> > resource you have no control over
> > b) a clear violation of net-neutrality
> > c) a violation of standards (as it's probably one of those that instead
> > of returning NXDOMAIN as required sends you to an advertising page.
> > )
>
> Tell me about it.  My ISP and I are having a pitched battle about it
> now.  Unfortunately my options are limited in my current area but at
> least it's not Comcast!
>
> > I'm pretty sure you /can/ configure BIND to work like that. I should
> > imagine you could set up forwarders to TCP-based DNS servers.
> >
> > The other option is to get a DNS server set up on a VPS and tunnel your
> > requests to it via IPSEC.
>
> Sounds like a good idea, time to learn IPSEC!
>
> Thanks,
>
> Chris
>
> --
> Chris Horry
> zer...@gmail.com
> http://www.twitter.com/zerbey
> PGP:638C3E7A
>
>


Re: [squid-users] Transition from Squid to bluecoat ProxySG

2016-04-16 Thread Brendan Kearney

On 04/16/2016 09:39 AM, asad wrote:

Hello,

I'm in the process of helping a friend who works in a bank whose 
management has decided to move from a Squid infrastructure to the 
bluecoat ProxySG solution.


I want to know what pitfalls should be anticipated, from the project 
management side as well as the technical end.


The few details I'm allowed to share: there are 1000 users and 40 Mbps 
of bandwidth, with an active/passive setup/configuration between the 
BCP and HQ sites.

There are good and bad points to each technology, but when there is an 
executive management decision there is not much room left to debate 
which is better. I'm hoping to hear from anyone in the community who 
has been involved in such a task in the past, is now, or will be in 
the near future.


Thanks

regards
asad


i support blue coat proxies professionally, and run squid proxies 
personally.  while i have not migrated from one to the other, i have 
done data center relocation work where 4 existing sites with proxy 
footprints were consolidated into two new sites.  i had to support 
physically moving 14 production proxies to the new sites without 
interrupting internet access for 30k+ users and myriads of production 
applications that use the proxies to access internet resources.


can you explain what you mean by active / passive config?  i take this 
to mean some sort of failover mechanism is used.  are you using load 
balancing to manage this?  if you are, your cutover can be a lot easier, 
as  you would simply insert the blue coat into the load balanced pool 
and assign traffic to it.


if you are using a proxy script or PAC file, and not load balancing, you 
can assign certain traffic to the blue coat in the PAC.  a lot of 
flexibility can be exercised in a PAC file, since you dictate the logic.
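
for example, a PAC fragment that steers a pilot set of destinations to
the new device while everything else stays on the existing pool (all
names are assumptions):

function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".pilot.example.com"))
        return "PROXY bluecoat1.corp.example:8080";
    return "PROXY squid-vip.corp.example:3128";
}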


i had the luxury of both load balancing and PAC file in order to manage 
my moves.  i did a little of both the above to manage traffic levels 
during transition periods.  i was able to move the initial devices 
because one site had newer gear with plenty of performance head room.  
then it was a cycle of reassign traffic, move more hardware, lather, 
rinse and repeat...


i would suggest you build and test your new gear extensively before 
starting any cutover.  develop a build process and a separate validation 
process to ensure consistency and accuracy of the build.  check interface 
settings, routing, dns settings, authentication pieces and any 
particular items in your environment that are considered show-stoppers, 
unacceptable risks or gaps in security posture.  have technical personnel 
use the blue coat proxies for a couple days to a week before giving the 
general user audience access to them.  you can get their input about 
issues and performance and tweak things.


recommend that the environment use both load balancing and a PAC file, 
if you can convince the mgmt.  sell them on buying more capable 
hardware than you need, because failover events such as ISP (as opposed 
to WAN) outages could be almost seamless, and the additional load of 
users pushed to an alternate device in the load balanced pool won't bring 
down the box.  this assumes that you maintain the active / passive 
config i think you have.  sell them on reliability, stability and high 
availability.


i am interested to hear what decisions are made and how things progress 
for you.  best of luck.


brendan


Re: [squid-users] Identifying intercepted clients

2016-04-04 Thread Brendan Kearney

On 04/03/2016 08:06 PM, Amos Jeffries wrote:

On 4/04/2016 4:22 a.m., Brendan Kearney wrote:

with fedora 24 being released in a couple months, haproxy v1.6.x will be
available, and the ability to easily intercept HTTP traffic will be in
the version (see the set-uri directive).  with v1.6 i will be able to
rewrite the URL, so that squid can process the request properly.

That does not make sense. Intercepting and URL-rewriting are completely
different actions.

The Squid-3.5 and later versions are able to receive PROXY protocol
headers from HAProxy. You may find that much better than fiddling around
with URLs and available in your current HAProxy.
i use iptables to intercept the request, and need the set-uri option in 
haproxy 1.6.x to concatenate the Host header with the GET line, in order 
to have the request in the form that squid expects.  yes, they are 
separate actions and i should have been clearer.


i will look into the PROXY protocol additions, but that may not be an 
option until i can get all my boxes upgraded.
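
for when that upgrade happens, the squid 3.5 side is roughly this (the 
frontend address is a placeholder):

# accept PROXY protocol only from the load balancer
acl frontends src 192.168.88.254
http_port 3129 require-proxy-header
proxy_protocol_access allow frontends
proxy_protocol_access deny all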




my problem is that i run authenticated access on the proxy, and will need
to exempt the traffic from that restriction.


What restriction?
the authenticated access restriction.  not much of my policy allows for 
unauthenticated access.




what mechanisms can i use to identify the fact that the client traffic
has been intercepted, so that i can create ACLs to match the traffic?  i
don't want to use things like IPs or User-Agent strings, as they may
change or be unknown.

Only the interceptor can do that traffic distinction. Once traffic gets
multiplexed the information is lost.
i tried to create / insert a header at the router/firewall/load 
balancer, and test for the existence of the header in squid, but that 
did not seem to go as well as i thought it might.



i was thinking about sending the intercepted traffic to a different
port, say 3129, and then using localport to identify the traffic. with
an ACL, i would exempt the traffic from auth, etc.  are there better
options?  how are other folks dealing with intercepted and explicit
traffic on the same box?
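
a sketch of that separate-port idea (note: with nat interception, 
localport can end up reflecting the original destination port, so tagging 
the port with a name and matching on myportname is the safer variant; the 
tag name is a placeholder):

http_port 3128
http_port 3129 intercept name=intercepted_http
acl from_intercept myportname intercepted_http
http_access allow from_intercept
# the authenticated http_access rules for explicit traffic stay below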

That would be one fairly good way to distinguish the traffic types. So
why is the URL fiddling happening?
because i need to concatenate the Host header with the GET line (URI) 
in order for squid to be able to process the request.  i don't have squid 
3.5 yet, nor do i have haproxy 1.6 yet, so i have to use the old 
interception methods to accomplish this, at this point.


Amos

thanks for the feedback.  seems i might be able to do things, just have 
to find my way through until newer versions give me better means of 
doing it.


thanks,

brendan


[squid-users] Identifying intercepted clients

2016-04-03 Thread Brendan Kearney
with fedora 24 being released in a couple months, haproxy v1.6.x will be 
available, and the ability to easily intercept HTTP traffic will be in 
that version (see the set-uri directive).  with v1.6 i will be able to 
rewrite the URL, so that squid can process the request properly.  my 
problem is that i run authenticated access on the proxy, and will need 
to exempt the traffic from that restriction.


what mechanisms can i use to identify the fact that the client traffic 
has been intercepted, so that i can create ACLs to match the traffic?  i 
don't want to use things like IPs or User-Agent strings, as they may 
change or be unknown.


i was thinking about sending the intercepted traffic to a different 
port, say 3129, and then using localport to identify the traffic. with 
an ACL, i would exempt the traffic from auth, etc.  are there better 
options?  how are other folks dealing with intercepted and explicit 
traffic on the same box?


thanks,

brendan


Re: [squid-users] intercepting roku traffic

2016-03-09 Thread Brendan Kearney

On 03/09/2016 06:18 AM, Amos Jeffries wrote:

On 9/03/2016 4:59 a.m., Brendan Kearney wrote:

i have a roku4 device and it constantly has issues causing it to
buffer.  i want to try intercepting the traffic to see if i can smooth
out the rough spots.

Squid is unlikely to help with this issue.

"Buffering ..." issues are usually caused by:

- broken algorithms on the device consuming data faster than it lets the
remote endpoint be aware it can process, and/or
- network level congestion, and/or
- latency increase from excessive buffer sizes (on device, or network).



  i can install squid on the router device i have
and intercept the port 80/443 traffic, but i want to push the traffic to
my load balanced VIP so the "real" proxies can do the fulfillment work.

Each level of software you have processing this traffic increases the
latency delays packets have. Setups like this also add extra bottlenecks
which can get congested.

Notice how both of those things are items on the problem list. So adding
a proxy is one of the worst things you can do in this situation.

On the other hand, it *might* help if the problem is lack of a cache
near the client(s). You need to know that a cache will help though
before starting.


My advice is to read up on "buffer bloat". What the term means and how
to remove it from your network. Check that you have ICMP and ICMPv6
working on your network to handle device level issues and congestion
handling activities.

Then if the problem remains, check your traffic to see how much is
cacheable. Squid intercepts can usually cache 5%-20% of any network
traffic if there is no other caching already being done on that traffic
(excluding browser caches). With attention and tuning it can reach
somewhere around 50% under certain conditions.

Amos


a bit about my router and network:

router - hp n36l microserver, 1.3 GHz Athlon II Neo CPU, 4 GB RAM, on 
board Gb NIC for WAN, HP nc364t 4x1Gb NIC using e1000e driver. the 4 
ports on the nc364t card are bonded with 802.3ad and LACP and 9 VLANs 
are trunked across.


switch 1 - cisco sg500-52
switch 2 - cisco sg300-28

router is connected to switch 1 with a 4 port bond and switch 1 is 
connected to switch 2 with a 4 port bond.  all network drops throughout 
the house are terminated to a patch panel and patched into the sg500.  
all servers are connected to the sg300, and have a 4 port bond for 
in-band connections and an IPMI card for out of band mgmt.


the router does firewall, internet gateway/NAT, load balancing, and 
routing (locally connected only, no dynamic routing such as ospf via 
quagga).


now, what i have done so far:

when i first got the roku4, i found issues with the sling tv app. hulu 
worked without issue, and continues without issue even now.  i have 
looked into QoS, firewall performance tweaks, ring buffer increases, and 
kernel tuning for things like packets per second capacity.  i also have 
roku SE devices, that have no issues in hulu or sling tv at all.  having 
put up a vm for munin monitoring, i am able to see some details about 
the network.


QoS will not be of any value because none of the links i control are 
saturated or congested.  everything is gig, except for the roku 
devices.  the 4 is 100 Mb and the SE's are wifi.  the only way for me to 
have QoS kick in is to artificially congest my links, say with very few 
ring buffers.  i dont see this as a reasonable option at this point.


i have tuned my firewall policy in several ways.  first, i stopped 
logging the roku HTTP/HTTPS traffic.  very chatty sessions lead to lots 
of logs.  each log event calls the "logger" binary, and i was paying 
penalties for starting a new process thousands of times to log the 
access events.  i also reject all other traffic from the roku's instead 
of dropping the traffic.  this helps with the google dns lookups the 
devices try, and i no longer pay the dns timeout penalties for that.  i 
have also stopped the systemd logging and i am not paying the i/o 
penalty for writing those logs to disk.  since i use rsyslog with RELP 
(Reliable Event Logging Protocol), all logging still goes on, i just have 
reliable syslog over tcp with receipt acknowledgment, and cascading FIFO 
queue to memory and then to disk if need be.  i believe this has helped 
reclaim i/o, interrupts and contexts, leading to some (minor) 
performance gains.
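
in iptables terms those policy changes amount to roughly this (the roku 
address and local resolver are placeholders):

# accept the web traffic with no LOG target, so no logger fork per event
iptables -A FORWARD -s 192.168.1.50 -p tcp -m multiport --dports 80,443 -j ACCEPT
# REJECT, not DROP, the hard-coded external dns lookups so the device fails fast
iptables -A FORWARD -s 192.168.1.50 -p udp --dport 53 ! -d 192.168.1.1 -j REJECT
iptables -A FORWARD -s 192.168.1.50 -j REJECT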


the hp nc364t quad gig nic's are bonded, and i see RX and TX errors on 
the bond interface (not the VLAN sub interfaces and not the physical 
p1pX interfaces).  i increased the ring buffers on all 4 interfaces from 
the default of 256 to 512 and then to 1024 and then to 2048, testing 
each change along the way.  1024 seems to be the best so far, and i don't 
think there are any issues with buffer bloat.  
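
the ring buffer changes were along these lines (interface names from the 
setup above):

ethtool -g p1p1            # show current and maximum ring sizes
ethtool -G p1p1 rx 1024 tx 1024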

[squid-users] intercepting roku traffic

2016-03-08 Thread Brendan Kearney
i have a roku4 device and it constantly has issues causing it to 
buffer.  i want to try intercepting the traffic to see if i can smooth 
out the rough spots.  i can install squid on the router device i have 
and intercept the port 80/443 traffic, but i want to push the traffic to 
my load balanced VIP so the "real" proxies can do the fulfillment work.  
i see these steps as being needed:


setup intercept instance on router device for ports 80 and 443
http_port 80 intercept
http_port 443 intercept

configure cache_peer with the VIP as a parent
cache_peer 192.168.120.1 parent 8080 0 default

with the above, i think i would get the intercepted traffic to my 
proxies, via the load balanced VIP, and be able to proxy the traffic.  
are there any glaring holes in my logic?  any help is appreciated.


thank you,

brendan


Re: [squid-users] Problems with NTLM authentication

2015-11-24 Thread Brendan Kearney

On 11/24/2015 10:08 AM, Verónica Ovando wrote:

My Squid Version:  Squid 3.4.8

OS Version:  Debian 8

I have installed Squid on a server using Debian 8 and seem to have the 
basics operating; at least when I start the squid service, I am no 
longer getting any error messages.  At this time, the goal is to 
authenticate users from Active Directory and log the user and the 
websites they are accessing.


I followed the official guide 
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm. I 
verified that samba is properly configured, as the guide suggests, with 
the basic helper in this way:


# /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-basic
domain\user pass
OK

Here is a part of my squid.conf where I defined my ACLs for the groups 
in AD:


 

auth_param ntlm program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp --domain=DOMAIN.com

auth_param ntlm children 30

auth_param basic program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 5
auth_param basic realm Servidor proxy-cache de mi Dominio
auth_param basic credentialsttl 2 hours

external_acl_type AD_Grupos ttl=10 children=10 %LOGIN 
/usr/lib/squid3/ext_wbinfo_group_acl -d


acl AD_Standard external Grupos_AD Standard
acl AD_Exceptuados external Grupos_AD Exceptuados
acl AD_Bloqueados external Grupos_AD Bloqueados

acl face url_regex -i "/etc/squid3/facebook"
acl gob url_regex -i "/etc/squid3/gubernamentales"

http_access allow AD_Standard
http_access allow AD_Exceptuados !face !gob
http_access deny AD_Bloqueados
 



I tested using only the basic scheme (I commented the lines out for 
NTLM auth) and every time I open the browser it asks me my user and 
pass. And it works well because I can see in the access.log my 
username and all the access policies defined are correctly applied.


But if I use NTLM auth, the browser still shows me the pop-up (it must 
not be shown) and if I enter my user and pass it keeps asking me for them 
until I cancel it.


My access.log, in that case, shows a TCP_DENIED/407 as expected.

What could be the problem? I assume that the Kerberos and NTLM 
protocols work together, I mean that they can live together in the same 
environment and Kerberos is used by default. How can I check that NTLM 
is really working? Could it be a problem in the squid conf? Or maybe 
AD is not allowing NTLM traffic?


Sorry for my English. Thanks in advance.

make sure Internet Explorer is set to use Integrated Windows 
Authentication (IWA).  Tools --> Internet Options --> Advanced --> 
Security --> Enable Integrated Windows Authentication.



Re: [squid-users] intercepting traffic

2015-11-19 Thread Brendan Kearney

On 11/18/2015 10:42 PM, Amos Jeffries wrote:

On 19/11/2015 3:08 p.m., Brendan Kearney wrote:

I am trying to set up a transparent, intercepting squid instance, along
side my existing explicit instance, and would like some input around
what i have buggered up so far.

i am running HAProxy in front of two squid instances, with the XFF
header added by HAProxy.  My squid configs are all set to follow the XFF
for the real source and logging is setup around digesting XFF for the
source.

i took my config and added:
http_port 192.168.88.1:3129 intercept

This tells Squid you are intercepting the traffic between HAProxy and Squid.

You describe HAProxy as explicitly sending traffic to the Squid, so
there is no need for interception into Squid.


this tells me that i am getting to the squid instances via the load
balancer, but i am running into the "NAT must occur on the squid box"
rule, i think.

Yes. That rule, and the intercept option that causes it, do not apply
when the software sending traffic to Squid is explicitly configured,
such as you describe HAProxy being.

Amos

when i put in just the DNAT that sends the traffic to the proxy VIP and 
load balances the requests to the squid instances on port 3128 (not the 
intercept port), i issue a curl command:


curl -vvv --noproxy squid-cache.org http://squid-cache.org/

and get an error page saying:

...
The following error was encountered while trying to retrieve the URL: /

Invalid URL

Some aspect of the requested URL is incorrect.

Some possible problems are:

- Missing or incorrect access protocol (should be http:// or similar)
- Missing hostname
- Illegal double-escape in the URL-Path
- Illegal character in hostname; underscores are not allowed.


is the DNAT stripping header info, such as the Host header, or am i 
still missing something?


thanks,

brendan


Re: [squid-users] intercepting traffic

2015-11-19 Thread brendan kearney
So does that mean I can run the DNAT on the firewall/router/load balancer
device and remove the intercept line from my configs, and expect things to
work?
On Nov 18, 2015 10:43 PM, "Amos Jeffries"  wrote:

> On 19/11/2015 3:08 p.m., Brendan Kearney wrote:
> > I am trying to set up a transparent, intercepting squid instance, along
> > side my existing explicit instance, and would like some input around
> > what i have buggered up so far.
> >
> > i am running HAProxy in front of two squid instances, with the XFF
> > header added by HAProxy.  My squid configs are all set to follow the XFF
> > for the real source and logging is setup around digesting XFF for the
> > source.
> >
> > i took my config and added:
> > http_port 192.168.88.1:3129 intercept
>
> This tells Squid you are intercepting the traffic between HAProxy and
> Squid.
>
> You describe HAProxy as explicitly sending traffic to the Squid, so
> there is no need for interception into Squid.
>
> >
> > this tells me that i am getting to the squid instances via the load
> > balancer, but i am running into the "NAT must occur on the squid box"
> > rule, i think.
>
> Yes. That rule and the intercept option that cause it does not apply
> when the software sending traffic to Squid is explicitly configured.
> Such as you describe HAProxy being.
>
> Amos
>


[squid-users] intercepting traffic

2015-11-18 Thread Brendan Kearney
I am trying to set up a transparent, intercepting squid instance, along 
side my existing explicit instance, and would like some input around 
what i have buggered up so far.


i am running HAProxy in front of two squid instances, with the XFF 
header added by HAProxy.  My squid configs are all set to follow the XFF 
for the real source and logging is setup around digesting XFF for the 
source.


i took my config and added:
http_port 192.168.88.1:3129 intercept

on the router/firewall/load balancer device that is running HAProxy, i 
added a NAT rule as described here:

http://www.fwbuilder.org/4.0/docs/users_guide5/redirection_rules.shtml

in my cache.log i get:
2015/11/18 20:45:13 kid1|  NF getsockopt(SO_ORIGINAL_DST) failed on 
local=192.168.88.1:3129 remote=192.168.88.254:37102 FD 20 flags=33: (92) 
Protocol not available
2015/11/18 20:49:05 kid1|  NF getsockopt(SO_ORIGINAL_DST) failed on 
local=192.168.88.1:3129 remote=192.168.88.254:37381 FD 20 flags=33: (92) 
Protocol not available


this tells me that i am getting to the squid instances via the load 
balancer, but i am running into the "NAT must occur on the squid box" 
rule, i think.
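
for reference, that rule is normally satisfied by doing the nat on each 
squid box itself, roughly like this (the interface name is a placeholder):

# on the squid box, not on the load balancer
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129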


i want to intercept http traffic, and load balance the traffic to my 
squid instances.  this link:


http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute

seems to be a step in the right direction, but i am at a loss on how to 
apply the logic to my environment.  my proxies are on a separate vlan, 
behind a load balancer, not in a DMZ.  i am missing something and not 
sure exactly what it is.  any input on where i need to go?


thanks,

brendan


Re: [squid-users] Multicast WCCPv2 + Squid 3.3.8

2015-11-11 Thread brendan kearney
I am interested in this topic.  Would love to hear about your progress.

The os that squid runs on must participate in a dynamic routing protocol
such as ospf and needs to advertise a route to the multicast ip via itself.

Generally this is done by adding a virtual interface to the loopback and
giving that interface the multicast ip.  When the squid service is running
the os should advertise the route to the multicast ip on its loopback.
When the squid service is stopped the os should remove the route.
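
a rough sketch of the interface part (the service address is a
placeholder):

ip addr add 192.0.2.100/32 dev lo label lo:wccp   # while squid is running
ip addr del 192.0.2.100/32 dev lo                 # when squid stops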

There are a couple of timing and interaction pieces you need to account
for, and manage outside of squid.

www.linuxjournal.com/article/3041
On Nov 10, 2015 11:26 PM, "Fatah Mumtaz"  wrote:

> Hi everyone,
> Currently i'm building lab for my thesis on the topic Multicast WCCPv2
> with Squid. I'm trying to config WCCPv2 to work with single proxy server
> (Squid 3.3.8) and multiple Cisco 2821 routers. WCCPv2 works well with one
> proxy server and one router configuration. It's been 2 months that I've been
> trying to implement multicast WCCPv2 and I actually don't know how to
> config Squid to be able to communicate with multiple routers using
> multicast to announce its presence. Because I've read the documentation
> from Cisco and I've concluded into something like this "the routers are
> somehow the "clients" but not by sending IGMP messages, just by listening
> for multicast packets send by the "sources", the cache engines, on a
> specific multicast group address. " . So the proxy server (or Squid)
> acted as the multicast server that sends multicast packets. I've been looking
> over the net and still have no clue.
>
> And my question is simple :
> 1. Is it possible to config squid to announce its presence to the
> routers using multicast? And if it is possible, please kindly provide any
> detail.
>
>
> I also attached the topology i'm working on and please tell me if you need
> any further info.
>
>
>
> Thank You
> Fatah Mumtaz
>
>
>
>


Re: [squid-users] Monitoring Squid using SNMP.

2015-10-20 Thread Brendan Kearney

On 10/20/2015 02:26 PM, sebastien.boulia...@cpu.ca wrote:


Hi,

I would like to monitor Squid with Centreon using SNMP.

I configured Squid using http://wiki.squid-cache.org/Features/Snmp

## SNMP Configuration

acl snmpcpu snmp_community cpuread

snmp_port 3401

snmp_access allow snmpcpu localnet

snmp_access deny all

netstat -ano | grep 3401

udp6       0      0 :::3401                 :::*                    off (0.00/0/0)

BUT

When I try to do a snmpwalk, I got a timeout.

[root@bak ~]# snmpwalk xx:3401 -c cpuread -v 1

[root@bak ~]#

Anyone monitor Squid using SNMP ? Do you experiment some issues ?

Thanks for your answers community!

Sébastien Boulianne




this did not work - snmpwalk -v2c -c SecretHandShake proxy1:3401
this did work - snmpwalk -v2c -c SecretHandShake proxy1:3401 .1.3

not sure why you would need to "prime" the OID, but...
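
the likely reason: snmpwalk starts its walk at mib-2 by default, and 
squid only answers under its own enterprise subtree, so pointing the walk 
at .1.3 (or straight at the squid subtree) is what gets a response:

# squid's MIB lives under enterprises.3495
snmpwalk -v2c -c SecretHandShake proxy1:3401 .1.3.6.1.4.1.3495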


Re: [squid-users] LDAP related question.

2015-07-31 Thread Brendan Kearney

On 07/31/2015 08:34 AM, Dan Purgert wrote:

Quoting Eliezer Croitoru :


I managed to make it work!
I am using ubuntu 14.04.2 with openLDAP and phpldapadmin.
I have changed my server to look like yours and it still didn't work.
So what I did was this: I changed the command to:
/usr/lib/squid3/ext_ldap_group_acl -d -b "dc=ngtech,dc=local" -D 
"cn=admin,dc=ngtech,dc=local" -w password-f 
"(&(objectClass=*)(memberUid=%u)(cn=%g))" -h 127.0.0.1


Which actually works great.
I enter:"user1 parents" and it says OK.

I have been reading that there might be a reason that memberOf will 
not work as expected and was hoping someone here might know about it.





Oh right, I had to compile in(?) something to make "memberOf" play 
nice.  Don't remember if it was in slapd or squid though... would need 
to grab my setup notes from that server to see.


Glad to hear you got it working though!




since you have phpLDAPAdmin, my exports should be a near 1:1 import for you.

load the module:

dn: cn=module{2},cn=config #<-- adjust the number between { and } to 
your env

cn: module{2}  # <-- same adjustment as above
objectclass: olcModuleList
objectclass: top
olcmoduleload: {0}memberof.la  # <-- this is 0 because its the first 
module loaded in this cn
olcmodulepath: /usr/lib64/openldap #<-- adjust for your env; this is where 
fedora places the *.la files; memberof.la should be in this dir


load the overlay into the database (not the DIT):

dn: olcOverlay={2}memberof,olcDatabase={2}mdb,cn=config  #<-- again 
adjust for your env; it is coincidence that both #s are 2 in my env.

objectclass: olcOverlayConfig
objectclass: olcMemberOf
objectclass: top
olcmemberofrefint: TRUE
olcoverlay: {2}memberof  # <-- adjust for your env, too
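
to load the two LDIFs, something like this should work (the file names 
are placeholders; cn=config edits as root usually go over the ldapi 
socket with SASL EXTERNAL):

ldapadd -Y EXTERNAL -H ldapi:/// -f memberof-module.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f memberof-overlay.ldif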

i will send screenshots from my phpLDAPAdmin to you off list














Re: [squid-users] LDAP related question.

2015-07-31 Thread brendan kearney
Not near my gear and notes, but will get you what I have later.
On Jul 31, 2015 10:31 AM, "Eliezer Croitoru"  wrote:

> On 31/07/2015 15:37, brendan kearney wrote:
>
>> Pretty sure memberOf is an overlay you have to enable in openldap
>>
>
> I have tried to use this:
>
> http://www.schenkels.nl/2013/03/how-to-setup-openldap-with-memberof-overlay-ubuntu-12-04/
>
> But it doesn't mention that you need to put the file in the scheme
> settings directory and also openLDAP would not survive a restart so I am
> open to understand how to do it.
>
> Thanks,
> Eliezer


Re: [squid-users] LDAP related question.

2015-07-31 Thread brendan kearney
Pretty sure memberOf is an overlay you have to enable in openldap
On Jul 31, 2015 8:34 AM, "Dan Purgert"  wrote:

Quoting Eliezer Croitoru :

I managed to make it work!
> I am using ubuntu 14.04.2 with openLDAP and phpldapadmin.
> I have changed my server to look like yours and it still didn't work.
> So what I did was this: I changed the command to:
> /usr/lib/squid3/ext_ldap_group_acl -d -b "dc=ngtech,dc=local" -D
> "cn=admin,dc=ngtech,dc=local" -w password-f
> "(&(objectClass=*)(memberUid=%u)(cn=%g))" -h 127.0.0.1
>
> Which actually works great.
> I enter:"user1 parents" and it says OK.
>
> I have been reading that there might be a reason that memberOf will not
> work as expected and was hoping someone here might know about it.
>
>

Oh right, I had to compile in(?) something to make "memberOf" play nice.
Don't remember if it was in slapd or squid though... would need to grab my
setup notes from that server to see.

Glad to hear you got it working though!




Re: [squid-users] bypass proxy

2015-06-17 Thread brendan kearney
Look into the pacparser project on github.  It allows you to evaluate a pac
file and test the logic.
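
for example, with the pactester utility that ships with pacparser (the 
file name and url below are placeholders):

pactester -p wpad.dat -u http://www.rediff.com/
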
Hi All,

I have 2 issues

First one: How can i bypass proxy for an IP in LAN.


Second one:
I am running squid on openwrt and i want to allow some websites to bypass
proxy and want to allow them go direct.
For that i am using wpad with PAC file but the problem is for some websites
it works and for some it doesn't.

Here is my PAC file



function FindProxyForURL(url, host)
{
// The 1st if function tests if the URI should be by-passed
// Proxy By-Pass List
if (
// ignore RFC 1918 internal addreses
isInNet(host, "10.0.0.0", "255.0.0.0") ||
isInNet(host, "172.16.0.0", "255.240.0.0") ||
isInNet(host, "192.168.0.0", "255.255.0.0") ||

// is url is like http://server by-pass
isPlainHostName(host) ||

// localhost!!
localHostOrDomainIs(host, "127.0.0.1") ||

// by-pass internal URLS
dnsDomainIs(host, ".flipkart.com") ||
dnsDomainIs(host, ".apple.com") ||
dnsDomainIs(host, ".linuxbite.com") ||
dnsDomainIs(host, ".rediff.com") ||

// by-pass FTP
//shExpMatch(url, "ftp:*")
url.substring(0, 4)=="ftp:"
)

// If True, tell the browser to go direct
return "DIRECT";

// If False, it's not on the by-pass list, so proxy the request
// (if you fail to connect to the proxy, try direct).

return "PROXY 192.168.1.1:3128";
//return "DIRECT";
}



To be precise it works for apple.com but doesn't work for rest of the
websites.
Please enlighten me.

-- 
Regards,
Yashvinder



Re: [squid-users] Squid doesn't write logs via rsyslog

2015-06-08 Thread Brendan Kearney

On 06/08/2015 06:46 PM, Amos Jeffries wrote:

On 8/06/2015 11:02 p.m., Antony Stone wrote:

On Monday 08 June 2015 at 12:53:00 (EU time), Robert Lasota wrote:


the problem is it still writes logs to files /var/log/access.log or
/opt/var/log/access.log (depends what I set in conf) but never to rsyslog.

I mean, I have set rsyslog to send logs to a remote central server, and
for other apps like sshd or named it works and rsyslog sends them, but
Squid still does not honor that and writes locally to files.

I set different combinations in squid.conf but nothing, even:
access_log syslog squid
cache_log syslog squid.
..also nothing

You appear to be missing the facility and priority settings (ie: telling
syslogd how to handle the messages).

See http://www.squid-cache.org/Doc/config/access_log/

Try something such as:

access_log syslog:daemon.info


Also, cache.log is the unified stderr output of all Squid sub-processes
(workers, diskers, helpers etc). It cannot use syslog at this time.

You can possibly make cache.log file point at a unix socket device that
pipes somewhere like syslog though.

Amos


to stop rsyslog from writing something, i use:

if $programname startswith 'NetworkManager' then -/dev/null
&~

all messages from NetworkManager are written out to /dev/null in 
asynchronous fashion (does not wait for confirmation of the write action 
succeeding, or fire-and-forget mode).  the &~ is a hard stop action so 
all processing of rules stops if the criteria are met.


you would probably want something like that, but will have to play 
around with it, to make it do what you want.


by the by, are you using plain rsyslog forwarding ala:

*.* @@remote-host:514

i am using RELP (Reliable Event Logging Protocol) to forward all logs from 
all my boxes to a central device where they are loaded into mariadb.  
the relp module creates a "store-and-forward" fifo queue that can 
overcome network outages (length of outage handled is dictated by queue 
size), and also uses TCP for reliability.  there are modules for 
encryption, authentication, etc for relp, too. there is also phplogcon, 
which i use to review the logs in the database.
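
the forwarding side of that looks roughly like this in rsyslog terms (the 
target host and queue sizing are placeholders):

module(load="omrelp")
action(type="omrelp" target="loghost.example.net" port="2514"
       queue.type="LinkedList" queue.filename="relp_fwd"
       queue.saveOnShutdown="on" action.resumeRetryCount="-1")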



Re: [squid-users] High-availability and load-balancing between N squid servers

2015-06-08 Thread Brendan Kearney

On 06/08/2015 04:23 PM, Rafael Akchurin wrote:

Hello all,

What is the recommended approach to perform load balancing and high 
availability between N squid servers?
I have the following list of requirements to fulfil:

1) Manage N squid servers that share cache (as far as I understand this is done 
using cache_peer). Desirable.
2) Availability: if any of the N servers fails, the clients are redirected to 
the rest N-1. Preferable.
3) Scalability: the load is distributed (round-robin or some other algorithm) 
between N servers; if a new server is added (N + 1), new clients will be able to 
use it, reducing the load on the rest N. Preferable.
4) I need to be able to identify client IP addresses on the squid side and/or 
perform Squid authentication. The client IP and User Name are later to be 
passed to the ICAP server to scan the HTTP(S) request/response contents using 
icap_send_client_ip and icap_send_client_username. Very important requirement.
5) I need to support both HTTP and HTTPS connections with support of selective 
SSL Bump. I.e. for some web sites I do not want to look inside SSL so that 
original site's certificates are used for encryption. Very important 
requirement too.

I know that strictly for HTTP I could use HAProxy with Forward-For or something 
similar, but the 5th requirement is very important and I could not find a way 
to handle SSL with HAProxy properly.

The only idea that comes to my mind is to use some form of round-robin load 
balancing on the level of DNS, but it has its own drawbacks (should be able to 
check availability of my N servers + not real balancing, more like 
distribution).
Any help/thoughts are appreciated.

Thank you!

Best regards,
Rafael Akchurin
Diladele B.V.


1 - you are likely going to want to create a config for peers:
acl peers src 192.168.1.1/32
acl peers src 192.168.1.2/32
...
http_access allow peers
...
cache_peer 192.168.1.1 sibling 3128 4827 htcp=no-clr
cache_peer 192.168.1.2 sibling 3128 4827 htcp=no-clr
...

remember to make sure all peers are properly represented

2 - use a load balancing device or software, and this is part-and-parcel 
to that functionality


3 - load balancers give you several options.  i use "least connections" 
which is a bit more intelligent than straight round-robin.


4 - depending on how you set up your load balancer, you might be able to 
get the client IP without playing games.  if you can't see the client IP 
as the source of the connection, you will have to work with the 
"X-Forwarded-For" header, like so:

...
follow_x_forwarded_for allow svc_chk
follow_x_forwarded_for deny all
...
acl_uses_indirect_client on
...
log_uses_indirect_client on
...

Also, your auth methods may have nuances that need to be accounted for 
(Kerberos and load balancing require some extra steps).


5 - HAProxy will work for HTTP and HTTPS.  remember, your clients 
aren't talking to HAProxy for HTTP proxying.  the port you load balance 
on with HAProxy is not the HTTP proxy process.  all HAProxy has to do is 
hand the connection off to Squid, which will handle the HTTP or HTTPS or 
HTTPS-with-SSL-Bump, independent of anything HAProxy sees or cares about.
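
a bare-bones sketch of that in haproxy terms (names and addresses are 
placeholders):

listen squid_vip
    bind 192.168.120.1:3128
    mode tcp
    balance leastconn
    server squid1 192.168.1.1:3128 check
    server squid2 192.168.1.2:3128 check

mode tcp keeps haproxy out of the http conversation entirely, which also 
avoids breaking connection-oriented auth schemes like NTLM.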


[squid-users] sharing a tidbit

2015-04-28 Thread Brendan Kearney
i have 2 squid instances behind HAProxy, balanced using leastconn.  each 
proxy server has a NFS mount under /etc/squid/acls/ where external acls 
are kept.  because the NFS mount is common to both instances, i only 
need to make an update in one place and both proxies will get the 
update.  when i put this together, i wanted a means of reconfiguring 
squid in some sort of automated fashion, based on if the acl files (or 
their contents) were changed.


below is the script i came up with for that.  i call the script from 
root's crontab once every 5 minutes (mind the wrap):


*/5 * * * * /root/bin/SquidReconfigure #reconfigure squid if ACL files 
have been updated


the script will create a temp file and write the time of last 
modification in seconds since Epoch to the temp file for tracking.  if 
the value changes, the temp file is updated and a flag is set to 
indicate that a reconfigure is warranted.  when the reconfigure is 
performed, it logs via logger/syslog that a refresh was performed.


the logic is tested and running on my boxes and works nicely for my 
needs.  because i am a small environment and can deal with the fact the 
proxies are performing these actions at the same time, i don't need to 
stagger the offset for each server.  if your reconfigure action takes a 
long time, you may want to consider what options you have in order to 
continue providing functionally available services.


#!/bin/bash

aclDir=/etc/squid/acl
statFile=/tmp/squidStats
reconfigure=0

# create the tracking file on the first run
[ -f $statFile ] || touch $statFile

for aclFile in $(ls $aclDir)
do
    previous=$(grep ^$aclFile\  $statFile |awk '{print $2}')
    current=$(stat -t $aclDir/$aclFile -c %Y)

    if [ -z "$previous" ]
    then
        # first time this acl file is seen; record it without reconfiguring
        echo "$aclFile $current" >> $statFile
    elif [ "$current" != "$previous" ]
    then
        #echo -e $aclFile' \t'"change found"
        # mind the wrap on the below line
        sed -i -e "s/$aclFile\ $previous/$aclFile\ $current/" $statFile
        #echo -e $aclFile' \t'"setting marker"
        reconfigure=1
    fi
done

if [ $reconfigure = 1 ]
then
    #echo "reconfiguring squid"
    squid -k reconfigure
    logger -t '(squid-1)' -p 'local4.notice' Squid ACL Refresh
fi


Re: [squid-users] NTLM authentication problems with HTTP 1.1

2015-04-08 Thread brendan kearney
Note the lack of a user-agent string.  This is likely an app that cannot
authenticate.

My standard for Auth Bypass is source IP, user-agent string and destination
URL.  Generally the source is preferred to be statically assigned otherwise
you need to allow the entire dhcp pool or range.  Because there is no
user-agent you can drop the requirement or force it with some sort of
negated logic (!any)
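
a sketch of that standard in squid.conf terms (the source address is a 
placeholder; the destination is from the trace below):

acl postage_src src 10.1.2.3
acl postage_dst dstdomain .internetpostage.com
acl has_ua browser -i .
# allow this one app through without credentials when no user-agent is sent
http_access allow postage_src postage_dst !has_ua
# the authenticated policy (http_access deny !auth all) stays below this
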
On Apr 8, 2015 11:21 AM, "Samuel Anderson"  wrote:

> Hello all,
>
>
> I'm having a problem where HTTP 1.1 connect requests do not authenticate
> using NTLM. Browsing the internet works fine in all major browsers, I
> mostly see this occurring in programs that are installed locally on a users
> computer. Using wireshark I'm able to follow the TCP stream and I can see
> that the server returns the error (407 Proxy Authentication Required). I am
> able to work around this problem by explicitly bypassing a domain from
> requiring authentication, however I really don't want to do that. Any ideas
> would be appreciated very much.
>
> Thanks,
>
>
> Below is the content summery of some of the network packets that I'm
> working with along with my config file
>
> TCP Stream Content
>
> 
> CONNECT batch.internetpostage.com:443 HTTP/1.1
> Host: batch.internetpostage.com
> Proxy-Connection: Keep-Alive
>
>
> HTTP/1.1 407 Proxy Authentication Required
> Server: squid/3.3.8
> Mime-Version: 1.0
> Date: Tue, 07 Apr 2015 21:02:24 GMT
> Content-Type: text/html
> Content-Length: 3208
> X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
> Proxy-Authenticate: Negotiate
> Proxy-Authenticate: NTLM
> X-Cache: MISS from squid2..local
> X-Cache-Lookup: NONE from squid2..local:3128
> Via: 1.1 squid2..local (squid/3.3.8)
> Connection: close
> 
>
> CONFIG File
>
> 
>
> #Kerberos and NTLM authentication
>
> auth_param negotiate program /usr/local/bin/negotiate_wrapper --ntlm
> /usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp
> --domain=.LOCAL --kerberos /usr/lib/squid3/negotiate_kerberos_auth -d
> -s GSS_C_NO_NAME
> auth_param negotiate children 30
> auth_param negotiate keep_alive off
>
> auth_param ntlm program /usr/bin/ntlm_auth
> --helper-protocol=squid-2.5-ntlmssp --domain=
> auth_param ntlm children 30
> auth_param ntlm keep_alive off
>
> # AD group membership lookup
>
> external_acl_type ldap_group ttl=60 children-startup=10 children-max=50
> children-idle=2 %LOGIN /usr/lib/squid3/ext_ldap_group_acl -R -K -S -b
> "DC=,DC=local" -D "CN=SQUID,OU= Service Accounts,DC=,DC=local"
> -w "" -f "(&(objectclass=person)
> (sAMAccountname=%v)(memberof=CN=%a,OU=PROXY,ou=ALL  Groups,DC=
> ,DC=local))" -h dc1..local,dc2..local,dc3..local,dc4..local
>
> # auth required
>
> acl auth proxy_auth REQUIRED
> http_access deny !auth all
>
> 
>
> --
> Samuel Anderson  |  Information Technology Administrator  |  International
> Document Services
>
> IDS  |  11629 South 700 East, Suite 200  |  Draper, UT 84020-4607
>
>
> CONFIDENTIALITY NOTICE:
> This e-mail and any attachments are confidential. If you are not an
> intended recipient, please contact the sender to report the error and
> delete all copies of this message from your system.  Any unauthorized
> review, use, disclosure or distribution is prohibited.
>


Re: [squid-users] load balancing and site failover

2015-03-26 Thread Brendan Kearney
On Thu, 2015-03-26 at 13:53 +1300, Amos Jeffries wrote:
> On 26/03/2015 10:26 a.m., Brendan Kearney wrote:
> > On Wed, 2015-03-25 at 15:03 +1300, Amos Jeffries wrote:
> >> On 25/03/2015 9:55 a.m., brendan kearney wrote:
> >>> Was not sure if bugzilla was used for mailing list issues.  If you would
> >>> like me to open one, I will but it looks like the list is working again.
> >>
> >> Bugzilla is used, list bugs under the "project services" product.
> >>
> >>
> >> As for your query...
> >>
> >>> On Mar 24, 2015 2:25 PM, "Brendan Kearney" wrote:
> >>>
> >>>> On Tue, 2015-03-24 at 10:18 -0400, Brendan Kearney wrote:
> >>>>> while load balancing is not a requirement in a proxy environment, it
> >>>>> does afford a great deal of functionality, scaling and fault tolerance
> >>>>> in one.  several if not many on this list probably employ them for their
> >>>>> proxies and likely other technologies, but they are not all created
> >>>>> equal.
> >>>>>
> >>>>> i recently looked to see if a specific feature was in HAProxy.  i was
> >>>>> looking to see if HAProxy could reply to a new connection with a RST
> >>>>> packet if no pool member was available.
> >>>>>
> >>>>> the idea behind this is, if all of the proxies are not passing the
> >>>>> service check and are marked down by the load balancer, the reply of a
> >>>>> RST in the TCP handshake (i.e. SYN -> RST, not SYN -> SYN/ACK -> ACK)
> >>>>> tells the browser to failover to the next proxy assigned by the PAC
> >>>>> file.
> >>>>>
> >>>>> where i work, we have this configuration working.  the load balancers
> >>>>> are configured with the option to send a reset when no proxy is
> >>>>> available in the pool.  the PAC file assigns all 4 of the proxy VIPs in
> >>>>> a specific order based on which proxy VIP is assigned as the primary.
> >>>>> In every case, if the primary VIP does not have an available pool
> >>>>> member, the browser fails over to the next in the list.  failover would
> >>>>> happen again, if the secondary VIP replies with a RST during the
> >>>>> connection establishing.  the process repeats until a TCP connection
> >>>>> establishes or all proxies assigned have been exhausted.  the browser
> >>>>> will use the proxy VIP that it successfully connects to, for the
> >>>>> duration of the session.  once the browser is closed and reopened, the
> >>>>> evaluation of the PAC file occurs again, and the process starts anew.
> >>>>> plug-ins such as Proxy Selector are the exception to this, and can be
> >>>>> used to reevaluate a PAC file by selecting it for use.
> >>>>>
> >>>>> we have used this configuration several times, when we found an ISP link
> >>>>> was flapping or some other issue more global in nature than just the
> >>>>> proxies was affecting our egress and internet access.  i can attest to
> >>>>> the solution as working and elegantly handling site wide failures.
> >>>>>
> >>>>> being that the solutions where i work are proprietary commercial
> >>>>> products, i wanted to find an open source product that does this.  i
> >>>>> have been a long time user of HAProxy, and have recommended it for
> >>>>> others here, but sadly they cannot perform this function.  per their
> >>>>> mailing list, they use the network stack of the OS for connection
> >>>>> establishment and cannot cause a RST to be sent to the client during a
> >>>>> TCP handshake if no pool member is available.
> >>>>>
> >>>>> they suggested an external helper that manipulates IPTables rules based
> >>>>> on a pool member being available.  they do not feel that a feature like
> >>>>> this belongs in a layer 4/7 reverse proxy application.
> >>
> >> They are right. HTTP != TCP.
> > i didnt confuse that detail.  it was unknown to me that HAProxy could
> > not tie layer 7 status to layer 3/4 actions.  the decisions they made
> > and how they architected the app is why they cannot do this, not that it
> > is technically impossible to do it.  i may be spoiled because i work
> > with equipment that can do this for me.

Re: [squid-users] load balancing and site failover

2015-03-25 Thread Brendan Kearney
On Wed, 2015-03-25 at 15:03 +1300, Amos Jeffries wrote:
> On 25/03/2015 9:55 a.m., brendan kearney wrote:
> > Was not sure if bugzilla was used for mailing list issues.  If you would
> > like me to open one, I will but it looks like the list is working again.
> 
> Bugzilla is used, list bugs under the "project services" product.
> 
> 
> As for your query...
> 
> > On Mar 24, 2015 2:25 PM, "Brendan Kearney" wrote:
> > 
> >> On Tue, 2015-03-24 at 10:18 -0400, Brendan Kearney wrote:
> >>> while load balancing is not a requirement in a proxy environment, it
> >>> does afford a great deal of functionality, scaling and fault tolerance
> >>> in one.  several if not many on this list probably employ them for their
> >>> proxies and likely other technologies, but they are not all created
> >>> equal.
> >>>
> >>> i recently looked to see if a specific feature was in HAProxy.  i was
> >>> looking to see if HAProxy could reply to a new connection with a RST
> >>> packet if no pool member was available.
> >>>
> >>> the idea behind this is, if all of the proxies are not passing the
> >>> service check and are marked down by the load balancer, the reply of a
> >>> RST in the TCP handshake (i.e. SYN -> RST, not SYN -> SYN/ACK -> ACK)
> >>> tells the browser to failover to the next proxy assigned by the PAC
> >>> file.
> >>>
> >>> where i work, we have this configuration working.  the load balancers
> >>> are configured with the option to send a reset when no proxy is
> >>> available in the pool.  the PAC file assigns all 4 of the proxy VIPs in
> >>> a specific order based on which proxy VIP is assigned as the primary.
> >>> In every case, if the primary VIP does not have an available pool
> >>> member, the browser fails over to the next in the list.  failover would
> >>> happen again, if the secondary VIP replies with a RST during the
> >>> connection establishing.  the process repeats until a TCP connection
> >>> establishes or all proxies assigned have been exhausted.  the browser
> >>> will use the proxy VIP that it successfully connects to, for the
> >>> duration of the session.  once the browser is closed and reopened, the
> >>> evaluation of the PAC file occurs again, and the process starts anew.
> >>> plug-ins such as Proxy Selector are the exception to this, and can be
> >>> used to reevaluate a PAC file by selecting it for use.
> >>>
> >>> we have used this configuration several times, when we found an ISP link
> >>> was flapping or some other issue more global in nature than just the
> >>> proxies was affecting our egress and internet access.  i can attest to
> >>> the solution as working and elegantly handling site wide failures.
> >>>
> >>> being that the solutions where i work are proprietary commercial
> >>> products, i wanted to find an open source product that does this.  i
> >>> have been a long time user of HAProxy, and have recommended it for
> >>> others here, but sadly they cannot perform this function.  per their
> >>> mailing list, they use the network stack of the OS for connection
> >>> establishment and cannot cause a RST to be sent to the client during a
> >>> TCP handshake if no pool member is available.
> >>>
> >>> they suggested an external helper that manipulates IPTables rules based
> >>> on a pool member being available.  they do not feel that a feature like
> >>> this belongs in a layer 4/7 reverse proxy application.
> 
> They are right. HTTP != TCP.
i didnt confuse that detail.  it was unknown to me that HAProxy could
not tie layer 7 status to layer 3/4 actions.  the decisions they made
and how they architected the app is why they cannot do this, not that it
is technically impossible to do it.  i may be spoiled because i work
with equipment that can do this for me.
> 
> In particular TCP depends on routers having a full routing map of the
> entire Internet (provided by BGP) and deciding the best upstream hop
> based on that global info. Clients have one (and only one) upstream
> router for each server they want to connect to.
i will contest this.  my router does not need a full BGP map to route
traffic locally on my LAN or remotely out its WAN interface.  hell, it
does not even run BGP, and i can still get to the intarwebs, no problem.
it too, only has one upstream router / default route.
> 
> In HTTP each proxy (aka router)

Re: [squid-users] load balancing and site failover

2015-03-24 Thread brendan kearney
Was not sure if bugzilla was used for mailing list issues.  If you would
like me to open one, I will but it looks like the list is working again.
On Mar 24, 2015 2:25 PM, "Brendan Kearney"  wrote:

> On Tue, 2015-03-24 at 10:18 -0400, Brendan Kearney wrote:
> > while load balancing is not a requirement in a proxy environment, it
> > does afford a great deal of functionality, scaling and fault tolerance
> > in one.  several if not many on this list probably employ them for their
> > proxies and likely other technologies, but they are not all created
> > equal.
> >
> > i recently looked to see if a specific feature was in HAProxy.  i was
> > looking to see if HAProxy could reply to a new connection with a RST
> > packet if no pool member was available.
> >
> > the idea behind this is, if all of the proxies are not passing the
> > service check and are marked down by the load balancer, the reply of a
> > RST in the TCP handshake (i.e. SYN -> RST, not SYN -> SYN/ACK -> ACK)
> > tells the browser to failover to the next proxy assigned by the PAC
> > file.
> >
> > where i work, we have this configuration working.  the load balancers
> > are configured with the option to send a reset when no proxy is
> > available in the pool.  the PAC file assigns all 4 of the proxy VIPs in
> > a specific order based on which proxy VIP is assigned as the primary.
> > In every case, if the primary VIP does not have an available pool
> > member, the browser fails over to the next in the list.  failover would
> > happen again, if the secondary VIP replies with a RST during the
> > connection establishing.  the process repeats until a TCP connection
> > establishes or all proxies assigned have been exhausted.  the browser
> > will use the proxy VIP that it successfully connects to, for the
> > duration of the session.  once the browser is closed and reopened, the
> > evaluation of the PAC file occurs again, and the process starts anew.
> > plug-ins such as Proxy Selector are the exception to this, and can be
> > used to reevaluate a PAC file by selecting it for use.
> >
> > we have used this configuration several times, when we found an ISP link
> > was flapping or some other issue more global in nature than just the
> > proxies was affecting our egress and internet access.  i can attest to
> > the solution as working and elegantly handling site wide failures.
> >
> > being that the solutions where i work are proprietary commercial
> > products, i wanted to find an open source product that does this.  i
> > have been a long time user of HAProxy, and have recommended it for
> > others here, but sadly they cannot perform this function.  per their
> > mailing list, they use the network stack of the OS for connection
> > establishment and cannot cause a RST to be sent to the client during a
> > TCP handshake if no pool member is available.
> >
> > they suggested an external helper that manipulates IPTables rules based
> > on a pool member being available.  they do not feel that a feature like
> > this belongs in a layer 4/7 reverse proxy application.
> >
> > my search for a load balancer solution went through ipvsadm, balance and
> > haproxy before i selected haproxy.  haproxy was more feature rich than
> > balance, and easier to implement than ipvsadm.  do any other list
> > members have a need for such a feature from their load balancers?  do
> > any other list members have site failover solutions that have been
> > tested or used and would consider sharing their design and/or pain
> > points?  i am not looking for secret sauce or confidential info, but
> > more high level architecture decisions and such.
> >
>
> trying to send this again, as it was rejected previously.
>
>


Re: [squid-users] load balancing and site failover

2015-03-24 Thread Brendan Kearney
On Tue, 2015-03-24 at 10:18 -0400, Brendan Kearney wrote:
> while load balancing is not a requirement in a proxy environment, it
> does afford a great deal of functionality, scaling and fault tolerance
> in one.  several if not many on this list probably employ them for their
> proxies and likely other technologies, but they are not all created
> equal.
> 
> i recently looked to see if a specific feature was in HAProxy.  i was
> looking to see if HAProxy could reply to a new connection with a RST
> packet if no pool member was available.
> 
> the idea behind this is, if all of the proxies are not passing the
> service check and are marked down by the load balancer, the reply of a
> RST in the TCP handshake (i.e. SYN -> RST, not SYN -> SYN/ACK -> ACK)
> tells the browser to failover to the next proxy assigned by the PAC
> file.
> 
> where i work, we have this configuration working.  the load balancers
> are configured with the option to send a reset when no proxy is
> available in the pool.  the PAC file assigns all 4 of the proxy VIPs in
> a specific order based on which proxy VIP is assigned as the primary.
> In every case, if the primary VIP does not have an available pool
> member, the browser fails over to the next in the list.  failover would
> happen again, if the secondary VIP replies with a RST during the
> connection establishing.  the process repeats until a TCP connection
> establishes or all proxies assigned have been exhausted.  the browser
> will use the proxy VIP that it successfully connects to, for the
> duration of the session.  once the browser is closed and reopened, the
> evaluation of the PAC file occurs again, and the process starts anew.
> plug-ins such as Proxy Selector are the exception to this, and can be
> used to reevaluate a PAC file by selecting it for use.
> 
> we have used this configuration several times, when we found an ISP link
> was flapping or some other issue more global in nature than just the
> proxies was affecting our egress and internet access.  i can attest to
> the solution as working and elegantly handling site wide failures.
> 
> being that the solutions where i work are proprietary commercial
> products, i wanted to find an open source product that does this.  i
> have been a long time user of HAProxy, and have recommended it for
> others here, but sadly they cannot perform this function.  per their
> mailing list, they use the network stack of the OS for connection
> establishment and cannot cause a RST to be sent to the client during a
> TCP handshake if no pool member is available.
> 
> they suggested an external helper that manipulates IPTables rules based
> on a pool member being available.  they do not feel that a feature like
> this belongs in a layer 4/7 reverse proxy application.
> 
> my search for a load balancer solution went through ipvsadm, balance and
> haproxy before i selected haproxy.  haproxy was more feature rich than
> balance, and easier to implement than ipvsadm.  do any other list
> members have a need for such a feature from their load balancers?  do
> any other list members have site failover solutions that have been
> tested or used and would consider sharing their design and/or pain
> points?  i am not looking for secret sauce or confidential info, but
> more high level architecture decisions and such.
> 

trying to send this again, as it was rejected previously.



Re: [squid-users] Squid will not authenticate NTLM/Kerberos when behind a haproxy load balancer

2015-03-19 Thread Brendan Kearney
On Thu, 2015-03-19 at 19:32 -0600, Samuel Anderson wrote:
> Hey, I actually just figured it out. literally about 2 minutes ago.
> 
> 
> I changed the mode from (http) to (tcp) in the HAPROXY.CFG
> 
> 
> It looks like it's able to authenticate again. Thanks for the
> response.
> 
> On Thu, Mar 19, 2015 at 7:27 PM, Brendan Kearney 
> wrote:
> On Thu, 2015-03-19 at 19:01 -0600, Samuel Anderson wrote:
> > Hello All,
> >
> >
> > I have 2 squid servers that authenticate correctly when you
> point your
> > browser to either of them. I'm using a negotiate_wrapper. I
> set it up
> > following this
> >
> 
> (http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory)
> >
> >
> > I would like to set both servers behind a haproxy load
> balancer,
> > however when you try to utilize the haproxy load balancer,
> it will not
> > authenticate anymore. It just gives an error asking to
> authenticate.
> >
> >
> > Any ideas?
> >
> >
> > Thanks in advance.
> >
> >
> >
> >
> >
> >
> > ##HAPROXY.CFG##
> >
> >
> > global
> > log /dev/log local0
> > log /dev/log local1 notice
> > chroot /var/lib/haproxy
> > user haproxy
> > group haproxy
> > daemon
> >
> >
> > defaults
> > log global
> > mode http
> > option httplog
> > option dontlognull
> > contimeout 5000
> > clitimeout 5
> > srvtimeout 5
> >
> >
> > # reverse proxy-squid
> > listen  proxy 10.10.0.254:3128
> > mode http
> > cookie  SERVERID insert indirect nocache
> > balance roundrobin
> > option httpclose
> > option forwardfor header X-Client
> > server  squid1 10.10.0.253:3128 check inter 2000
> rise 2 fall 5
> > server  squid2 10.10.0.252:3128 check inter 2000
> rise 2 fall 5
> >
> >
> >
> >
> >
> >
> >
> >
> > ##SQUID.CONF##
> >
> >
> >
> >
> > #Kerberos and NTLM authentication
> > auth_param negotiate
> program /usr/local/bin/negotiate_wrapper
> > --ntlm /usr/bin/ntlm_auth --diagnostics
> > --helper-protocol=squid-2.5-ntlmssp --domain=.LOCAL
> > --kerberos /usr/lib/squid3/negotiate_kerberos_auth -d -s
> GSS_C_NO_NAME
> > auth_param negotiate children 30
> > auth_param negotiate keep_alive off
> >
> >
> > # LDAP authentication
> > auth_param basic program /usr/lib/squid3/basic_ldap_auth -R
> -b
> > "DC=,DC=local" -D "CN=SQUID,OU=Service
> Accounts,DC=,DC=local"
> > -w "" -f sAMAccountName=%s -h
> > 10.0.0.200,10.0.0.199,10.0.0.194,10.0.0.193
> > auth_param basic children 150
> > auth_param basic realm Please enter your Domain credentials
> to
> > continue
> > auth_param basic credentialsttl 1 hour
> >
> >
> > # AD group membership commands
> > external_acl_type ldap_group ttl=60 children-startup=10
> > children-max=50 children-idle=2 %
> > LOGIN /usr/lib/squid3/ext_ldap_group_acl -R -K -S -b
> > "DC=,DC=local" -D "CN=SQUID,OU=Service
> Accounts,DC=,DC=local"
> > -w "" -f "(&(objectclass=person) (sAMAccountname=%
> v)(memberof=CN=%
> > a,OU=PROXY,ou=ALL  Groups,DC=,DC=local))" -h
> > dc1..local,dc2..local,dc3..local,dc4..local
> >
> >
> > acl auth proxy_auth REQUIRED
> >
> >
> >
> > acl REQGROUPS external ldap_group PROXY-HIGHLY-RESTRICTIVE
> > PROXY-MEDIUM-RESTRICTIVE PROXY-MINIMAL-RESTRICTIVE
> PROXY-UNRESTRICTED PROXY-DEV PROXY-SALES

Re: [squid-users] Squid will not authenticate NTLM/Kerberos when behind a haproxy load balancer

2015-03-19 Thread Brendan Kearney
On Thu, 2015-03-19 at 19:01 -0600, Samuel Anderson wrote:
> Hello All,
> 
> 
> I have 2 squid servers that authenticate correctly when you point your
> browser to either of them. I'm using a negotiate_wrapper. I set it up
> following this
> (http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory)
>  
> 
> 
> I would like to set both servers behind a haproxy load balancer,
> however when you try to utilize the haproxy load balancer, it will not
> authenticate anymore. It just gives an error asking to authenticate.
> 
> 
> Any ideas?
> 
> 
> Thanks in advance.
> 
> 
> 
> 
> 
> 
> ##HAPROXY.CFG##
> 
> 
> global
> log /dev/log local0
> log /dev/log local1 notice
> chroot /var/lib/haproxy
> user haproxy
> group haproxy
> daemon
> 
> 
> defaults
> log global
> mode http
> option httplog
> option dontlognull
> contimeout 5000
> clitimeout 5
> srvtimeout 5
> 
> 
> # reverse proxy-squid
> listen  proxy 10.10.0.254:3128
> mode http
> cookie  SERVERID insert indirect nocache
> balance roundrobin
> option httpclose
> option forwardfor header X-Client
> server  squid1 10.10.0.253:3128 check inter 2000 rise 2 fall 5
> server  squid2 10.10.0.252:3128 check inter 2000 rise 2 fall 5
> 
> 
> 
> 
> 
> 
> 
> 
> ##SQUID.CONF##
> 
> 
> 
> 
> #Kerberos and NTLM authentication
> auth_param negotiate program /usr/local/bin/negotiate_wrapper
> --ntlm /usr/bin/ntlm_auth --diagnostics
> --helper-protocol=squid-2.5-ntlmssp --domain=.LOCAL
> --kerberos /usr/lib/squid3/negotiate_kerberos_auth -d -s GSS_C_NO_NAME
> auth_param negotiate children 30
> auth_param negotiate keep_alive off
> 
> 
> # LDAP authentication
> auth_param basic program /usr/lib/squid3/basic_ldap_auth -R -b
> "DC=,DC=local" -D "CN=SQUID,OU=Service Accounts,DC=,DC=local"
> -w "" -f sAMAccountName=%s -h
> 10.0.0.200,10.0.0.199,10.0.0.194,10.0.0.193
> auth_param basic children 150
> auth_param basic realm Please enter your Domain credentials to
> continue
> auth_param basic credentialsttl 1 hour
> 
> 
> # AD group membership commands
> external_acl_type ldap_group ttl=60 children-startup=10
> children-max=50 children-idle=2 %
> LOGIN /usr/lib/squid3/ext_ldap_group_acl -R -K -S -b
> "DC=,DC=local" -D "CN=SQUID,OU=Service Accounts,DC=,DC=local"
> -w "" -f "(&(objectclass=person) (sAMAccountname=%v)(memberof=CN=%
> a,OU=PROXY,ou=ALL  Groups,DC=,DC=local))" -h
> dc1..local,dc2..local,dc3..local,dc4..local
> 
> 
> acl auth proxy_auth REQUIRED
> 
> 
> 
> acl REQGROUPS external ldap_group PROXY-HIGHLY-RESTRICTIVE
> PROXY-MEDIUM-RESTRICTIVE PROXY-MINIMAL-RESTRICTIVE PROXY-UNRESTRICTED
> PROXY-DEV PROXY-SALES
> 
> 
> http_access deny !auth all
> http_access deny !REQGROUPS all
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> Samuel Anderson  |  Information Technology Administrator  |
>  International Document Services
> 
> 
> IDS  |  11629 South 700 East, Suite 200  |  Draper, UT 84020-4607
> 
> 
> 
> CONFIDENTIALITY NOTICE:
> This e-mail and any attachments are confidential. If you are not an
> intended recipient, please contact the sender to report the error and
> delete all copies of this message from your system.  Any unauthorized
> review, use, disclosure or distribution is prohibited.

how did you create and distribute the keytab for the proxies?  you must
create one keytab and put the same exact one on each of the proxies.
the KVNO numbers must match on every proxy.  run
"klist -Kket /path/to/the.keytab" on the proxies to check.

kerberos is heavily dependent on DNS.  the keytab should contain
PRIMARY/instance.domain.tld@REALM where PRIMARY is HTTP,
instance.domain.tld is the FQDN of the 10.10.0.254 IP, not either or
both of the individual proxies, and REALM should be the Kerberos REALM.
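
as a hedged example only (assuming AD is the KDC; the principal,
account and file names are placeholders), the keytab could be created
once on a domain controller with something like:

ktpass -princ HTTP/proxy.domain.tld@REALM -mapuser svc-proxy@REALM -crypto All -ptype KRB5_NT_PRINCIPAL -pass * -out squid.keytab

then copy that one squid.keytab to every proxy.  re-running ktpass
resets the account password and bumps the KVNO, which is exactly how
keytabs drift out of sync.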

did you export the environment variable for the keytab?  on fedora, i
put the following in /etc/sysconfig/squid:

KRB5_KTNAME=/etc/squid/squid.keytab
export KRB5_KTNAME

do you get an HTTP ticket from the directory?  from a command prompt,
what does "klist tickets" show?  you can also install the XP resource
kit and run kerbtray.exe to get that info.  win7 and newer may have it
built in.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Refresh ACL list only

2015-03-17 Thread Brendan Kearney
On Tue, 2015-03-17 at 16:13 -0300, Marcus Kool wrote:
> it has a configuration option to respond with
> 'allow all' during a reconfiguration.

a Fail-Open policy can be a security gap, and should be considered
carefully before implementing.  the intention of the whitelisted URLs is
to prevent access to content that is otherwise forbidden.  failing open,
even briefly, undermines that control.  what is the default setting
there?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Refresh ACL list only

2015-03-17 Thread Brendan Kearney
On Wed, 2015-03-18 at 00:08 +0600, Yuri Voinov wrote:
> Brendan reads my thoughts. :)
> 
> You can, of course, use two or more squid instances with a Cisco
> running the WCCP protocol in front of them. WCCP can play with several
> cache instances in a load balancing role. A running squid sends
> "here I am" messages to the WCCP-enabled router, which will redirect
> traffic to a live cache. Meanwhile you can reconfigure the second
> squid instance, and vice versa.
> 
> 18.03.15 0:00, Brendan Kearney пишет:
> > On Tue, 2015-03-17 at 11:59 -0600, Samuel Anderson wrote:
> >> Unfortunately thats not really an option for me. I've already
> >> built everything just using squid. It works great and does
> >> everything I need it to do with the exception of refreshing the
> >> ACL lists. I just need to find a way to refresh those single
> >> lists without disrupting Internet traffic to the users. If anyone
> >> knows how to do this I would greatly appreciate it.
> >> 
> >> On Tue, Mar 17, 2015 at 11:39 AM, Yuri Voinov
> >>  wrote:
> > Did you hear about rewriters and filters? I.e., squidGuard, or 
> > Dansguardian? Or, of course 
> > https://www.urlfilterdb.com/products/ufdbguard.html ? It has
> > separate server process which can be restart VERY quickly 
> > independently of squid.
> > 
> > 17.03.15 23:35, Samuel Anderson пишет:
> >> Hello all,
> > 
> >> Does anyone know of a way to reload a single ACL list? I
> > have a
> >> very complicated and large config file that takes around 30
> > seconds
> >> to reload when I run the (squid3 -k reconfigure) command. I
> > have
> >> several ACL lists that need to be updated throughout the day
> > and it
> >> would be nice if I could only reload those ACL lists and not
> > the
> >> entire config. Its problematic because while its reloading,
> > the
> >> server is effectively down and disrupts Internet access for
> > the
> >> rest of the users. Below is a small sample of the lists that
> > will
> >> be updated. If I could add a TTL to the lists so squid would
> > reload
> >> them periodically without a full reconfigure would be ideal.
> > 
> > 
> > 
> >> acl GLOBAL-WHITELIST dstdomain 
> >> "/etc/squid3/whitelists/GLOBAL-WHITELIST" acl 
> >> UNRESTRICTED-WHITELIST dstdomain 
> >> "/etc/squid3/whitelists/UNRESTRICTED-WHITELIST" acl
> > DEV-WHITELIST
> >> dstdomain "/etc/squid3/whitelists/DEV-WHITELIST" acl 
> >> SALES-WHITELIST dstdomain
> > "/etc/squid3/whitelists/SALES-WHITELIST"
> > 
> > 
> >> Thanks
> > 
> > 
> > 
> > 
> >> ___ squid-users
> > mailing
> >> list squid-users@lists.squid-cache.org 
> >> http://lists.squid-cache.org/listinfo/squid-users
> > 
> >> ___ squid-users
> >> mailing list squid-users@lists.squid-cache.org 
> >> http://lists.squid-cache.org/listinfo/squid-users
> >> 
> >> 
> >> 
> >> 
> >> -- Samuel Anderson  |  Information Technology Administrator  | 
> >> International Document Services
> >> 
> >> 
> >> IDS  |  11629 South 700 East, Suite 200  |  Draper, UT
> >> 84020-4607
> >> 
> >> 
> >> 
> >> CONFIDENTIALITY NOTICE: This e-mail and any attachments are
> >> confidential. If you are not an intended recipient, please
> >> contact the sender to report the error and delete all copies of
> >> this message from your system.  Any unauthorized review, use,
> >> disclosure or distribution is prohibited. 
> >> ___ squid-users
> >> mailing list squid-users@lists.squid-cache.org 
> >> http://lists.squid-cache.org/listinfo/squid-users
> > 
> > do you have the luxury of multiple squid instances behind a load 
> > balancer?  mark one offline at the LB, reconfigure, mark online at
> > the LB.  Lather, rinse, repeat.
> > 
> > ___ squid-users mailing
> > list squid-users@lists.squid-cache.org 
> > http://lists.squid-cache.org/listinfo/squid-users
> > 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

i use haproxy to load balance 2 squid instances.  using this:

http://serverfault.com/questions/249316/how-can-i-remove-balanced-node-from-haproxy-via-command-line

you should be able to set up a process to mark your boxes offline, at
will, thereby allowing you to reconfigure your instances.
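
for example, with an admin-level stats socket defined in haproxy's
global section (the socket path is an assumption):

stats socket /var/run/haproxy.sock level admin

a member can be drained, reconfigured and restored like so.  the
"proxy/squid1" names are assumptions; use whatever listen/backend and
server names your haproxy.cfg defines:

echo "disable server proxy/squid1" | socat stdio /var/run/haproxy.sock
squid3 -k reconfigure
echo "enable server proxy/squid1" | socat stdio /var/run/haproxy.sock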

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Refresh ACL list only

2015-03-17 Thread Brendan Kearney
On Tue, 2015-03-17 at 11:59 -0600, Samuel Anderson wrote:
> Unfortunately thats not really an option for me. I've already built
> everything just using squid. It works great and does everything I need
> it to do with the exception of refreshing the ACL lists. I just need
> to find a way to refresh those single lists without disrupting
> Internet traffic to the users. If anyone knows how to do this I would
> greatly appreciate it.
> 
> On Tue, Mar 17, 2015 at 11:39 AM, Yuri Voinov 
> wrote:
> 
> Did you hear about rewriters and filters? I.e., squidGuard, or
> Dansguardian? Or, of course
> https://www.urlfilterdb.com/products/ufdbguard.html
> ?
> It has separate server process which can be restart VERY
> quickly
> independently of squid.
> 
> 17.03.15 23:35, Samuel Anderson пишет:
> > Hello all,
> >
> > Does anyone know of a way to reload a single ACL list? I
> have a
> > very complicated and large config file that takes around 30
> seconds
> > to reload when I run the (squid3 -k reconfigure) command. I
> have
> > several ACL lists that need to be updated throughout the day
> and it
> > would be nice if I could only reload those ACL lists and not
> the
> > entire config. Its problematic because while its reloading,
> the
> > server is effectively down and disrupts Internet access for
> the
> > rest of the users. Below is a small sample of the lists that
> will
> > be updated. If I could add a TTL to the lists so squid would
> reload
> > them periodically without a full reconfigure would be ideal.
> >
> >
> >
> > acl GLOBAL-WHITELIST dstdomain
> > "/etc/squid3/whitelists/GLOBAL-WHITELIST" acl
> > UNRESTRICTED-WHITELIST dstdomain
> > "/etc/squid3/whitelists/UNRESTRICTED-WHITELIST" acl
> DEV-WHITELIST
> > dstdomain "/etc/squid3/whitelists/DEV-WHITELIST" acl
> > SALES-WHITELIST dstdomain
> "/etc/squid3/whitelists/SALES-WHITELIST"
> >
> >
> > Thanks
> >
> >
> >
> >
> > ___ squid-users
> mailing
> > list squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> >
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 
> 
> 
> 
> -- 
> Samuel Anderson  |  Information Technology Administrator  |
>  International Document Services
> 
> 
> IDS  |  11629 South 700 East, Suite 200  |  Draper, UT 84020-4607
> 
> 
> 
> CONFIDENTIALITY NOTICE:
> This e-mail and any attachments are confidential. If you are not an
> intended recipient, please contact the sender to report the error and
> delete all copies of this message from your system.  Any unauthorized
> review, use, disclosure or distribution is prohibited.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

do you have the luxury of multiple squid instances behind a load
balancer?  mark one offline at the LB, reconfigure, mark online at the
LB.  Lather, rinse, repeat.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Log proxy hostname along with HTTP access URI

2015-02-24 Thread Brendan Kearney
On Tue, 2015-02-24 at 15:04 +0100, Peter Oruba wrote:
> Hello everybody,
> 
> 
> I’d like to distinguish multiple clients that are behind NAT from
> Squid’s perspective. Proxy authentication or sessions are not an
> option for different reasons and the idea that came up was to assign
> each client a unique hostname through which Squid would be addressed
> (e.g. UUID1.proxy.example.com and UUID2.proxy.example.com) A DNS
> wildcard entry *.proxy.example.com would make sure each proxy referral
> points to the same machine. Question: Is there a way to let Squid log
> the DNS name through which a client referred to it? I was not able to
> find any example in this regard and I assume that the proxy hostname
> is „lost“ after the client's DNS lookup and that the client-proxy
> connection is established.
> 
> 
> Thanks,
> Peter
> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

with the directives below, you can avoid all of this grief and reference
the client ip address where you need to.  just be sure the NAT device
adds the X-Forwarded-For (XFF) header.

#  TAG: follow_x_forwarded_for
#  TAG: acl_uses_indirect_client on|off
#  TAG: delay_pool_uses_indirect_client on|off
#  TAG: log_uses_indirect_client on|off
#  TAG: tproxy_uses_indirect_client on|off
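
a minimal squid.conf sketch of how those fit together, assuming the
NAT device sits at 192.168.0.1 (an example address -- only trust XFF
from hosts you control):

acl nat_device src 192.168.0.1
follow_x_forwarded_for allow nat_device
follow_x_forwarded_for deny all
acl_uses_indirect_client on
log_uses_indirect_client on

with that in place, %>a in the logs reflects the original client
address rather than the NAT address.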


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] benefits of using ext_kerberos_ldap_group_acl instead of ext_ldap_group_acl

2015-01-20 Thread Brendan Kearney
On Wed, 2015-01-21 at 02:10 +1300, Amos Jeffries wrote:
> On 21/01/2015 1:38 a.m., Simon Staeheli wrote:
> >> Whatever floats your boat. The point of the Addon/Plugin/helpers
> >> API is that you can use scripts if thy serve your needs better.
> >> 
> >> All the usual Open Source benefits of "many eyeballs" and
> >> somebody else doing code maintenance for you applies to using a
> >> bundled helper over a custom written one.
> >> 
> >> Beyond that the kerberos helper also provides automatic detection
> >> of which LDAP server to use via mutiple auto-configuration
> >> methods.
> >> 
> >> If you can demonstrate that the ext_kerberos_ldap_group_acl does 
> >> provides a superset of the functionality of ext_ldap_group_acl
> >> helper then I can de-duplicate the two helpers.
> >> 
> >> Amos
> > 
> > Thanks for the hint regarding automatic detection of LDAP servers.
> > I am just trying to find what the differences between the two
> > helpers are and which one does fit my needs better. Any others?
> > 
> 
> Nothing I can pick out easily.
> 
> > Do you know anything about the feature in
> > ext_kerberos_ldap_group_acl mentioned by Markus Moeller in an
> > earlier post?
> > 
> > "I have a new method in my squid 3.4 patch which uses the Group 
> > Information MS is putting in the ticket. This would eliminate the
> > ldap lookup completely." 
> > (http://www.squid-cache.org/mail-archive/squid-users/201309/0046.html)
> >
> > 
> I think that refers to a work in progress. Markus maintains the
> un-bundled version of his helpers a little in advance of what has made
> it into the Squid stable branch. Some of what is available in his
> helper downloads is only in the Squid-3.HEAD alpha development code so
> far.
> 
> I am working on obsoleting the need for external group helpers. From
> 3.5 auth helpers can deliver to Squid a set of group= kv-pair in their
> response. Those can be used with the note ACL type to check group
> names without any external_acl_type helper lookup (making group checks
> possible in 'fast' access controls).

will the 'fast' acl's (or the underlying code) use the kerberos keytab
as an option for authentication to ldap?  this will remove the
credentials from a plain text file on the filesystem.
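
for context, the bundled kerberos group helper can already bind to
ldap via GSSAPI using the service keytab instead of a password stored
on disk.  a hedged sketch, with the group and realm as placeholders:

external_acl_type krb_group ttl=3600 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -g PROXY-USERS@DOMAIN.TLD
acl krbusers external krb_group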

> Markus joined me in this project and his latest kerberos auth helper
> (in 3.HEAD and his versions - *not* the 3.5 bundled version) produces
> group= kv-pair. Unfortunately they are in the obscure S-*-*-* registry
> ID format MS uses. The external_acl_type helper interface cannot yet
> be passed notes to decipher that to a known group name.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] {Disarmed} Re: site cannot be accessed

2015-01-13 Thread Brendan Kearney
On Tue, 2015-01-13 at 09:30 +0200, Eliezer Croitoru wrote:
> Hey,
> 
> Did you had the chance to see this page:
> http://findproxyforurl.com/example-pac-file/
> 
> Eliezer
> 
> On 13/01/2015 06:22, Simon Dcunha wrote:
> > Dear Sarfraz,
> > appreciate your immediate reply
> >
> > Heres attached is my pac file
> > i am accessing the 10.101.101.10 server
> >
> > regards
> >
> > simon
> > 
> >
> >
> >
> >
> >
> >
> > From: "***some text missing***" 
> > To: "simon" , "squid-users" 
> > 
> > Sent: Monday, January 12, 2015 1:18:06 PM
> > Subject: {Disarmed} Re: [squid-users] site cannot be accessed
> >
> >
> > Share your PAC file please.
> >
> > Regards,
> > Sarfraz
> >
> >
> > From: Simon Dcunha 
> > To: squid-users 
> > Sent: Monday, January 12, 2015 11:41 AM
> > Subject: [squid-users] site cannot be accessed
> >
> >
> > Dear All,
> >
> > I have squid-3.1.10-22.el6_5.x86_64 running on centos 6.5 64 bit for quite 
> > sometime and working fine
> > just a couple of days back some users reported an issue
> >
> > i have a intranet site which just stopped working .
> > if I uncheck the proxy option in the browser the site works fine
> > the above users also use internet and is working fine
> >
> > I am using the pac file to bypass local sites and the local intranet 
> > websites are alredy added in the pac file
> >
> > also i am quite sure the above intranet website were working before
> >
> > the squid log shows
> > 
> > 1421053747.139 70984 172.16.6.21 TCP_MISS/000 0 GET MailScanner warning: 
> > numerical links are often malicious: http://10.101.101.10/ - 
> > DIRECT/10.101.101.10 -
> > 1421053779.524 32021 172.16.6.21 TCP_MISS/000 0 GET MailScanner warning: 
> > numerical links are often malicious: http://10.101.101.10/ - 
> > DIRECT/10.101.101.10 -
> > --
> >
> > appreciate your advice and concern
> >
> > regards
> >
> > simon
> >
> >
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> >
> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

an example pac file i started with...
function FindProxyForURL(url,host)
{

	//The DOMAINS listed in this array will be sent direct -- unproxied
	var direct_access_domains = new Array();
		direct_access_domains[0] = ".domain.tld";

	//The HOSTS listed in this array will be sent direct -- unproxied
	var direct_access_hosts = new Array();
		direct_access_hosts[0] = "host.domain.tld";

	//The NETWORKS listed in these arrays will be sent direct -- unproxied
	//These are parallel arrays -- the network and mask are in separate arrays
	//Please be careful when modifying this
	var direct_access_nets = new Array();
	var direct_access_mask = new Array();
		direct_access_nets[0] = "192.168.0.0";
		direct_access_mask[0] = "255.255.0.0";

	//DOMAINS in this array override all other logic in this script and are forced through the proxy
	var proxied_domains = new Array();
		proxied_domains[0] = "proxied-domain.tld";
		proxied_domains[1] = ".proxied-domain.tld";

	//HOSTS in this array override all other logic in this script and are forced through the proxy
	var proxied_hosts = new Array();
		proxied_hosts[0] = "host.proxied-domain.tld";

	//HOSTS in this array override all other logic in this script and are forced through proxy1
	var proxy1_hosts = new Array();
		proxy1_hosts[0] = "direct1.domain.tld";

	//HOSTS in this array override all other logic in this script and are forced through proxy2
	var proxy2_hosts = new Array();
		proxy2_hosts[0] = "direct2.domain.tld";

	var home_source_nets = new Array();
	var home_source_mask = new Array();
		home_source_nets[0] = "192.168.1.0";
		home_source_mask[0] = "255.255.255.0";
		home_source_nets[1] = "192.168.2.0";
		home_source_mask[1] = "255.255.255.0";

	var vpn_source_nets = new Array();
	var vpn_source_mask = new Array();
		vpn_source_nets[0] = "192.168.3.0";
		vpn_source_mask[0] = "255.255.255.0";

	//INITIALIZE VARIABLES
	var proxy_code = 0;
	var isnumeric = 0;
	var loc = 0;
	var myip = myIpAddress();

	//evaluate source IP to determine if user should utilize a proxy in a particular order
	//NOTE: the rest of the original script was truncated by the list
	//archive; the logic below is a minimal, untested reconstruction
	for (var i=0; i<home_source_nets.length; i++) {
		if (isInNet(myip, home_source_nets[i], home_source_mask[i])) {
			loc = 1;
		}
	}
	for (var i=0; i<vpn_source_nets.length; i++) {
		if (isInNet(myip, vpn_source_nets[i], vpn_source_mask[i])) {
			loc = 2;
		}
	}

	if (loc > 0) {
		return "PROXY proxy.bpk2.com:8080; PROXY proxy.bpk2.com:8081; PROXY proxy.bpk2.com:8082";
	}
	return "DIRECT";
}
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] what are people using nowadays (icap, a/v, etc)?

2014-12-21 Thread Brendan Kearney
i have been running Squid with DansGuardian, ClamAV and Privoxy for
quite some time, and have been successful and moderately pleased with
functionality and performance.

while DG has been a means for me to perform A/V scanning at the
infrastructure layer via ClamAV, the penalty has been losing HTTP/1.1
compression and cache controls.  because DG downgrades everything to
HTTP/1.0, i feel i am not getting the most out of my squid instance
(through no fault of the software, or those who write/support it).

i am looking to move with the times, and find out what people are using
these days.  c-icap seems to use libclamav for scanning, which would
suffice, but it is not available on fedora as an rpm in repos, it seems.
i would much prefer to have rpms from repos that are updated than to
have to install packages that i rolled myself.

libecap is available, but i don't know if i need more than just that to
do a/v scanning.  squidguard is also available, but it does not look
like it interfaces with clamav.

what are people using these days, and what feedback do you have on the
setup you have?
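
for what it is worth, if i do end up on c-icap with one of the clamav
service modules, my understanding is the squid side would look roughly
like the below (the "avscan" service name is an assumption and varies
by module):

icap_enable on
icap_service svc_resp respmod_precache bypass=off icap://127.0.0.1:1344/avscan
adaptation_access svc_resp allow all

which would let squid keep speaking HTTP/1.1 on both sides while the
scanning happens over ICAP.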

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] citrix receiver not authenticating with squid

2014-12-16 Thread Brendan Kearney
On Tue, 2014-12-16 at 19:40 +0100, Natxo Asenjo wrote:
> hi,
> 
> we have 2 centos 6 hosts providing a load-balanced squid service
> (behind keepalived and haproxy; haproxy sends requests to both squids)
> and authenticating users against an Active Directory environment. This
> is working really nice.
> 
> Our users log in their desktops and using the negotiate authenticator
> squid_kerb_auth they get automatically logged in the proxies. As a
> fall back for users using them but not logging in to the kerberos AD
> domain, we offer ldap authentication as well. That works fine too.
> 
> However, some of our users need to log in to other organizations
> desktops using the citrix reciever plugin and Internet Explorer. And
> there it fails. The plugin does not use the negotiate authenticator
> apparently so it falls back to the ldap authenticator. This works for
> a few minutes, but after some time the receiver ldap authentication
> pop up re-appears, and then again, and again. Not nice.
> 
> Does anyone have squid working to access citrix vpn sites without this
> problem? Do you know what setting to tweak?
> 
> Could it be that the load-balanced setting is provoking this? Should I
> have the haproxy config as a primary/slave instead of both masters?
> 
> This is a piece of the log file:
> 
> 172.20.4.33 - - [16/Dec/2014:14:59:47 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 407 3996 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_DENIED:NONE
> 172.20.4.33 - - [16/Dec/2014:14:59:48 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 407 3996 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_DENIED:NONE
> 172.20.4.33 - - [16/Dec/2014:14:59:48 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 407 3996 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_DENIED:NONE
> 172.20.4.33 - user@DOMAIN [16/Dec/2014:15:00:03 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 200 20472 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_MISS:DIRECT
> 172.20.4.33 -user@DOMAIN [16/Dec/2014:15:00:03 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 200 41726 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_MISS:DIRECT
> 172.20.4.33 -user@DOMAIN [16/Dec/2014:15:00:28 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 200 20447 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_MISS:DIRECT
> 172.20.4.33 - - [16/Dec/2014:15:01:37 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 407 3996 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_DENIED:NONE
> 172.20.4.33 -user@DOMAIN [16/Dec/2014:15:01:54 +0100] "CONNECT
> login.site.com:443 HTTP/1.0" 200 32958 "-" "Mozilla/5.0 (compatible;
> MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" TCP_MISS:DIRECT
> 
> My squid.conf for completeness
> 
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> 
> auth_param negotiate program /usr/lib/squid/squid_kerb_auth -i -s
> HTTP/proxy.domain@domain.tld
> auth_param negotiate children 10
> auth_param negotiate keep_alive on
> acl auth proxy_auth REQUIRED
> 
> auth_param basic program /usr/lib/squid/squid_ldap_auth -b
> dc=domain,dc=tld -f "samaccountname=%s" -s sub -D user -W
> /etc/squid/squid_ldap_bind -h dc1.domain.tld,dc2.domain.tld,dc3.domain.tld -p 3268 -Z
> auth_param basic children 10
> auth_param basic realm Proxy LDAP Authentication
> auth_param basic credentialsttl 8 hours
> 
> acl SSL_ports port 443
> acl SSL_ports port 1494
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> 
> #
> # Recommended minimum Access Permission configuration:
> #
> # Only allow cachemgr access from localhost
> http_access allow manager localhost
> http_access deny manager
> 
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> 
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> 
> http_access allow localhost
> 
> http_access deny !auth
> http_access allow auth
> 
> http_access deny all
> 
> # Squid normally listens to port 3128
> http_port 3128
> 
> # We recommend you to use at least the following line.
> hierarchy_stoplist cgi-bin ?
> 
> logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
> access_log /var/log/squid/combined.log combined
> 
> Thanks in advance.
> 
> --
> Groeten,
> natxo

Re: [squid-users] Cascading different authentification methods

2014-11-27 Thread Brendan Kearney
On Thu, 2014-11-27 at 02:24 -0800, christianmolecki wrote:
> Hello everyone,
> 
> we are using squid 3.4.6 with ntlm authentication.
> Depending on ActiveDirectory group memberships, the user is able to use
> different protocols.
> This works very well.
> 
> Now we need for some websites an additional basic authentication.
> So I configured the basic ncsa_auth helper.
> This works also, but only if the ntlm_auth helper is disabled.
> 
> How can I authenticate via ntlm + basic?
> 
> Is this generally possible?
> 
> 
> Best Regards
> Christian
> 
> 
> 
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Cascading-different-authentification-methods-tp4668532.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

why do you need to authenticate the users differently, for different
sites?  the auth for the proxies (indicated by an HTTP/407 status code)
is completely different from the auth for a web site/server (indicated
by an HTTP/401 status code).

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Centralized Squid - design and implementation

2014-11-19 Thread Brendan Kearney
On Wed, 2014-11-19 at 19:06 +0530, Nishant Sharma wrote:
> 
> On 19 November 2014 6:41:44 pm IST, brendan kearney  wrote:
> 
> >it
> >if the Content-Type header is not set to
> >"application/x-ns-proxy-autoconfig".
> >
> 
> Ah so that is why most of the java applets don't honour PAC settings and I 
> was blaming poor coding of those applets.
> 
> I usually serve PAC file with uhttpd or lighttpd servers running on the 
> gateways and never bothered to set correct content-type headers.
> 
> Would be great if you could include that in your document too.
> 
> Regards,
> Nishant
> 
> >GoToMeeting has also pissed me off.  The client parses the script and
> >takes
> >any value found in it, before executing the script and taking the
> >output of
> >the execution. This has the result of finding inappropriate proxies to
> >use,
> >when you are in a corporate environment and have proxies dedicated to
> >client access or other functions that should not be leveraged in all
> >cases.  I got their technical team on a call because we have a large
> >citrix
> >install base (both products have the same parent company) and
> >complained to
> >no avail.  I had to write a doc on how to correct the client config for
> >anyone needing to use GoTo... products.
> >On Nov 19, 2014 6:18 AM, "Kinkie"  wrote:
> >
> >> One word of caution: pactester uses the Firefox JavaScript engine,
> >which
> >> is more forgiving than MSIE's. So while it is a very useful tool, it
> >may
> >> let some errors slip through.
> >> On Nov 18, 2014 9:45 PM, "Jason Haar"  wrote:
> >>
> >>> On 19/11/14 01:39, Brendan Kearney wrote:
> >>> > i would suggest that if you use a pac/wpad solution, you look into
> >>> > pactester, which is a google summer of code project that executes
> >pac
> >>> > files and provides output indicating what actions would be
> >returned to
> >>> > the browser, given a URL.
> >>> couldn't agree more. We have it built into our QA to run before we
> >ever
> >>> roll out any change to our WPAD php script (a bug in there means
> >>> everyone loses Internet access - so we have to be careful).
> >>>
> >>> Auto-generating a PAC script per client allows us to change
> >behaviour
> >>> based on User-Agent, client IP, proxy and destination - and allows
> >us to
> >>> control what web services should be DIRECT and what should be
> >proxied.
> >>> There is no other way of achieving those outcomes.
> >>>
> >>> Oh yes, and now that both Chrome and Firefox support proxies over
> >HTTPS,
> >>> I'm starting to ponder putting up some form of proxy on the Internet
> >for
> >>> our staff to use (authenticated of course!) - WPAD makes that
> >something
> >>> we could implement with no client changes - pretty cool :-)
> >>>
> >>> --
> >>> Cheers
> >>>
> >>> Jason Haar
> >>> Corporate Information Security Manager, Trimble Navigation Ltd.
> >>> Phone: +1 408 481 8171
> >>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
> >>>
> >>> ___
> >>> squid-users mailing list
> >>> squid-users@lists.squid-cache.org
> >>> http://lists.squid-cache.org/listinfo/squid-users
> >>>
> >>
> >> ___
> >> squid-users mailing list
> >> squid-users@lists.squid-cache.org
> >> http://lists.squid-cache.org/listinfo/squid-users
> >>
> >>
> >
> >
> >
> >
> >___
> >squid-users mailing list
> >squid-users@lists.squid-cache.org
> >http://lists.squid-cache.org/listinfo/squid-users
> 

i didn't mean to get your hopes up about the document i wrote.  i wrote
it for my employer and its details are specific to our environment.  i
am sure i could create something if people would want it, but i am not
sure which topic to provide documentation for.  is it the web server /
pac file stuff or the GoToMeeting stuff?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Centralized Squid - design and implementation

2014-11-19 Thread brendan kearney
Yes, and it seems java is even more sensitive.  I had an array member
defined on a line that was not terminated with a semicolon and browsers did
not throw errors, but java did.  Pactester did not catch this.  Missing
curly braces and I think quotes are caught.

Also of note, you have to set the content type header for a pac file or
else you run into weird issues.  I found that browsers are forgiving and
will execute the script and take its output if the header is not set.
Flash does not do this.  It might call for the script but does not use it
if the Content-Type header is not set to
"application/x-ns-proxy-autoconfig".

GoToMeeting has also pissed me off.  The client parses the script and takes
any value found in it, before executing the script and taking the output of
the execution. This has the result of finding inappropriate proxies to use,
when you are in a corporate environment and have proxies dedicated to
client access or other functions that should not be leveraged in all
cases.  I got their technical team on a call because we have a large citrix
install base (both products have the same parent company) and complained to
no avail.  I had to write a doc on how to correct the client config for
anyone needing to use GoTo... products.
On Nov 19, 2014 6:18 AM, "Kinkie"  wrote:

> One word of caution: pactester uses the Firefox JavaScript engine, which
> is more forgiving than MSIE's. So while it is a very useful tool, it may
> let some errors slip through.
> On Nov 18, 2014 9:45 PM, "Jason Haar"  wrote:
>
>> On 19/11/14 01:39, Brendan Kearney wrote:
>> > i would suggest that if you use a pac/wpad solution, you look into
>> > pactester, which is a google summer of code project that executes pac
>> > files and provides output indicating what actions would be returned to
>> > the browser, given a URL.
>> couldn't agree more. We have it built into our QA to run before we ever
>> roll out any change to our WPAD php script (a bug in there means
>> everyone loses Internet access - so we have to be careful).
>>
>> Auto-generating a PAC script per client allows us to change behaviour
>> based on User-Agent, client IP, proxy and destination - and allows us to
>> control what web services should be DIRECT and what should be proxied.
>> There is no other way of achieving those outcomes.
>>
>> Oh yes, and now that both Chrome and Firefox support proxies over HTTPS,
>> I'm starting to ponder putting up some form of proxy on the Internet for
>> our staff to use (authenticated of course!) - WPAD makes that something
>> we could implement with no client changes - pretty cool :-)
>>
>> --
>> Cheers
>>
>> Jason Haar
>> Corporate Information Security Manager, Trimble Navigation Ltd.
>> Phone: +1 408 481 8171
>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Centralized Squid - design and implementation

2014-11-18 Thread Brendan Kearney
On Tue, 2014-11-18 at 08:35 -0300, Carlos Defoe wrote:
> Well, you just wrote a load balancer in PHP, with a load balancing
> algorithm in it. It serves the same purpose as HAproxy (I don't really
> use HAproxy, so I don't know, but I use the F5 big-ip which is
> perfectly capable of testing Internet links behind squid). In your
> scheme, WPAD is being used to tell the clients where the load balancer
> (a webserver with a php script) is, and PAC probably as the answer
> format, which returns a currently valid proxy node address directly to
> the client. But as far as I know, once the client gets the PAC answer,
> it willl not refresh until the browser is restarted, so it might be a
> small problem there.
> 
> But it is a good solution, as proved by your decade of using it, and
> much cheaper than a F5. As for the DNS trick, it is intended to
> increase high availability of the web servers that are serving
> wpad.dat (or your php script), because if it runs on only one
> webserver, at some point no clients will find anything at all.
> 
> Well, there's a lot of ways of doing the same thing, including ucarp,
> squid cache_peer as Amos said... It's just a matter of picking the one
> that fits.
> 
> On Tue, Nov 18, 2014 at 3:31 AM, Jason Haar  wrote:
> > On 18/11/14 16:07, Carlos Defoe wrote:
> >> As for my scenario, I also use wpad to configure some exceptions, some
> >> clients that will use a completely different proxy, etc...
> > Our "wpad.dat" is actually a PHP script which tests that the "official"
> > proxy (per client subnet) is actually working (with caching of the
> > results for performance reasons of course), if not it flicks them off to
> > another site's proxy server. Much better than trying to do dynamic DNS
> > tricks with a local HAproxy. ie if you have actually lost local Internet
> > access due to an ISP outage, HAproxy isn't going to help. But if WPAD
> > knows that a WAN-connected proxy is still working - why not point your
> > users at that instead
> >
> > We've been doing this for 10+ years, 99% of the time it's never needed,
> > but when it's needed, it works :-)
> >
> > --
> > Cheers
> >
> > Jason Haar
> > Corporate Information Security Manager, Trimble Navigation Ltd.
> > Phone: +1 408 481 8171
> > PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
> >
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

web servers providing pac/wpad don't need to be a single point of
failure, given that multiple instances of web servers can be behind a
load balancer, just like squid.  i have this arrangement, and get plenty
of reliability out of it.  it scales well too.

i have set up my VIP for the proxies in such a way that if you hit port
8080 you get load balanced to the pool with all members in it.  if you
hit the VIP on port 8081, you get load balanced to a pool with only the
first proxy in it, 8082 goes to the second proxy, etc.  this allows me
to test each proxy individually, and because the VIP name is the same,
the same kerberos ticket satisfies the auth requests.  at work, we have
F5s as well, and as a service check we attempt to GET some content we
host, and attempt to GET google or cnn.  the check requires that at
least one of the GETs succeed, in order to mark the device up.  i don't
have the external check in my HAProxy configs, but might have to look
into it.
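
a rough haproxy.cfg sketch of that port-per-pool arrangement (the VIP
address is made up for illustration):

listen proxy_all 192.168.100.1:8080
mode http
balance leastconn
server squid1 192.168.25.1:3128 check inter 2000 rise 2 fall 5
server squid2 192.168.50.1:3128 check inter 2000 rise 2 fall 5

listen proxy_one 192.168.100.1:8081
mode http
server squid1 192.168.25.1:3128 check

listen proxy_two 192.168.100.1:8082
mode http
server squid2 192.168.50.1:3128 check

because every port lives on the same VIP name, the same HTTP service
ticket satisfies kerberos auth no matter which pool a browser lands on.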

as for my pac/wpad script, i have logic in it to send requests proxied
or unproxied, based on my design or security decisions.  i have logic
for direct access domains, direct access hosts, direct access networks,
proxied domains (forces the use of the proxy, overriding any other
logic), proxied hosts (again, override logic), and hosts that are forced
via a specific proxy by sending the request to a specific port on the
VIP.

the bulk of my access will be proxied, and i return the VIP on port 8080
as the primary proxy, and then ports 8081, 8082, etc as secondary,
tertiary, and so on.  that way the browser will always get all possible
avenues for access, should something be wrong with one or more of the
VIPs.  what i am not sure of is whether HAProxy will reply with an RST when no
pool member(s) is/are available for a given VIP/pool.  we have this
setup at work on the F5s, and i'm not sure if i have it in HAProxy (or
if i can do it at all).

i would suggest that if you use a pac/wpad solution, you look into
pactester, which is a google summer of code project that executes pac
files and provides output indicating what actions would be returned to
the browser, given a URL.  so, with my setup if i call pactester and
give it http://www.google.com, it returns to me:

PROXY proxy.bpk2.com:8080; PROXY proxy.bpk2.com:8081; PROXY
proxy.bpk2.com:8082
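
the invocation behind that output is simple (the pac file name is an
example):

pactester -p wpad.dat -u http://www.google.com

pactester also takes -c to simulate a particular client ip, which is
handy when the script branches on myIpAddress() like mine does.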

Re: [squid-users] Centralized Squid - design and implementation

2014-11-16 Thread brendan kearney
Https is no issue.  The ssl session will persist to the same proxy for the
duration of the session.  I have no problems at all.
On Nov 16, 2014 3:58 PM, "alberto"  wrote:

> Ok, thank you very much. I think this is a good solution, maybe with an
> active/passive HAProxy with keepalived.
> Are you able to serve also https without any problem through HAProxy or
> only http request?
>
> regards,
> a.
>
>
>
> On Sun, Nov 16, 2014 at 8:00 PM, brendan kearney  wrote:
>
>> I use kerberos auth and do not have issues.  You have to pay attention to
>> the details with kerberos auth (dns name and principals need to match,
>> specific  options set in squid configs), but it is working very well for me
>> On Nov 16, 2014 12:32 PM, "alberto"  wrote:
>>
>>> Hi Brendan
>>>
>>> On Sun, Nov 16, 2014 at 5:51 PM, Brendan Kearney 
>>> wrote:
>>>
>>>> i use HAProxy to load balance based on the least number of connections
>>>>
>>>
>>> Do you use kerberos/AD authentication?
>>> Any issues with HAPROXY in front of the squid nodes?
>>>
>>> Thx,
>>> a.
>>>
>>>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Centralized Squid - design and implementation

2014-11-16 Thread brendan kearney
I use kerberos auth and do not have issues.  You have to pay attention to
the details with kerberos auth (dns name and principals need to match,
specific  options set in squid configs), but it is working very well for me
On Nov 16, 2014 12:32 PM, "alberto"  wrote:

> Hi Brendan
>
> On Sun, Nov 16, 2014 at 5:51 PM, Brendan Kearney  wrote:
>
>> i use HAProxy to load balance based on the least number of connections
>>
>
> Do you use kerberos/AD authentication?
> Any issues with HAPROXY in front of the squid nodes?
>
> Thx,
> a.
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Centralized Squid - design and implementation

2014-11-16 Thread Brendan Kearney
On Sun, 2014-11-16 at 17:22 +0100, Kinkie wrote:
> On Sun, Nov 16, 2014 at 4:54 PM, alberto  wrote:
> > Hello everyone,
> > first of all thanks to the community of squid for such a great job.
> 
> Hello Alberto,
> 
> [...]
> 
> > I have some questions that I would like to share with you:
> >
> > 1. I would like to leave the solution we are using now (wpad balancing). In
> > a situation like the one I have described, centralized squid serving the
> > spokes/branches, which is the best solution for clustering/HA? If one of the
> > centralized nodes had to "die" I would like client machines not to remain
> > "hanging" but to continue working on an active node without disruption. A
> > hierarchy of proxy would be the solution?
> 
> If you want to maximize the efficiency of your balancing solution, you
> probably want a slightly different approach: instead of using the
> client-ip as hashing mechanism, you want to hash on the destination
> host.
> e.g. have a pac-file like (untested, and to be adjusted):
> 
> function FindProxyForURL(url, host) {
>var dest_ip = dnsResolve(host);
>var dest_hash= dest_ip.slice(-1) % 2;
>if (dest_hash)
>  return "PROXY local_proxy1:port; PROXY local_proxy2:port; DIRECT";
>return "PROXY local_proxy2:port; PROXY local_proxy1:port; DIRECT"
> }
> This will balance by the final digit of the destination IP of the
> service. The downside is that it requires DNS lookups by the clients,
> and that if the primary local proxy fails, it takes a few seconds (up
> to 30) for clients to give up and fail over to secondary.
> 
> local_proxies can then either go direct to the origin server (if
> intranet) or use a balancing mechanism such as carp (see the
> documentation for the cache_peer directive in squid) to maximize
> efficiency, especially for Internet destinations.
> 
> The only single-point-of-failure at the HTTP level in this design is
> the PACfile server, it'll be up to you to make that reliable.
> 
> > 2. Bearing in mind that all users will be AD authenticated, which url
> > filtering/blacklist solution do you suggest?
> > In the past I have worked a lot with squidguard and dansguardian but now
> > they don't seem to be the state of the art anymore.
> > I've been thinking about two different solutions:
> >   2a. To use the native acl squid with the squidblacklist.org lists
> > (http://www.squidblacklist.org/)
> >   2b. To use urlfilterdb (http://www.urlfilterdb.com/products/overview.html)
> 
> I don't know, sorry.
> 
> > 3. Which GNU/Linux distro do you suggest me? I was thinking about Debian
> > Jessie (just frozen) or CentOS7.
> 
> http://wiki.squid-cache.org/BestOsForSquid
> 

i have all my squid instances (only 2 right now) share their caches:
cache_peer 192.168.25.1 sibling 3128 4827 htcp=no-clr
and
cache_peer 192.168.50.1 sibling 3128 4827 htcp=no-clr

which allows for anything cached to be served from local cache or a
sibling, instead of from the internet.  the likelihood of the sibling
cache being faster than the internet is high.
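
one detail worth calling out: for the htcp sibling queries to work,
each instance also has to listen for htcp.  a minimal sketch, assuming
the default port that the cache_peer lines above reference:

htcp_port 4827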

i use HAProxy to load balance based on the least number of connections
associated with a pool member.  since i am sharing caches, i don't
need to pin a client or request to any particular proxy, at all or for
only a period of time.  with HAProxy, i only see a couple of seconds
interruption when one proxy goes offline.  generally this is trivial in
the end user experience.  i have it logging when instances go offline or
come back online, and the stats web interface is handy for quickly
checking status.

while i don't have any suggestions about which filtering option to use, i
will note that DansGuardian versions i have found are only HTTP/1.0
compliant, so you are likely losing gzip compression at the protocol
layer, and caching is likely affected, too.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Fwd: Problems with NTLM authentication

2014-10-07 Thread Brendan Kearney
On Tue, 2014-10-07 at 20:50 +0200, Marcel wrote:
> Hello,
> 
> I have some more information.
> 
> The problem seems to have nothing to do with samba, krb5 or anything
> else. I set up a new squid that isn't in the AD and doesn't use any
> kind of authentication at all.
> 
> 
> I have the exact same problem. Here is my POC squid.conf:
> 
> acl localnet src all
> http_access allow all
> http_port 3128
> 
> 
> 
> That is the entire configuration in my tests. As you can see, it is
> absolutely impossible for it to be a configuration issue.
> 
> Why can't I log on to a NTLM protected website with Internet Explorer
> when going over a squid proxy?
> 
> 
> It works fine in Firefox.
> 
> 
> 
> -- Forwarded message --
> From: foggle 
> Date: 7 October 2014 18:10
> Subject: [squid-users] Problems with NTLM authentication
> To: squid-users@lists.squid-cache.org
> 
> 
> Hello,
> 
> I have set up a squid Proxy that uses samba/ntlm/krb5 to do SSO AD
> authentication in the Company.
> 
> 
> This works fine.
> 
> My problem is that external Websites on the Internet that use NTLM
> authentication of their own do not work. My users enter their Details
> (DOMAIN\user and Password) and receive authentication failures
> Messages.
> 
> Interestingly enough, this (almost) only occurs in Internet Explorer.
> The
> same sites work fine with Firefox.
> 
> Thank you in advance for your much needed help.
> 
> 
> 
> --
> View this message in context:
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Problems-with-NTLM-authentication-tp4667742.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 
> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

not something that squid would be affecting, as squid has nothing to do
with the auth to the website.

Tools -> Internet Options -> Advanced tab: scroll down until you see
Security.  Under Security, check the "Enable Integrated Windows
Authentication*" check box, and restart your browser.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users