Re: [squid-users] Need help setting up DD-WRT router to use Squid as a transparent proxy

2020-01-19 Thread Amos Jeffries
On 16/01/20 9:30 pm, Robert Marshall wrote:
> Hi all,
> 
> I'm trying to set up a transparent proxy on my network so that all
> devices are forced to use Squid/SquidGuard for network traffic, and can
> filter out undesirable destinations.
> 
> I have Squid/SquidGuard running on a Raspberry Pi 4, running the latest
> release of Raspbian Buster. The router is a D-Link DIR-860L, flashed with
> the 01/14/20 build of DD-WRT. I tried using the instructions at DD-WRT,
> but am running into problems.

What instructions? If they are telling you to "port forward" or NAT
traffic towards a separate Squid *machine*, they are outdated and now wrong.



Note that traffic MUST only have the NAT performed on the Squid machine.
Any use of DNAT (aka port forwarding) results in the problem you are seeing.

Last time I set up devices like DD-WRT the UI only provided a "DMZ
server" option. If you cannot do Policy Routing for only the port 80 or
443 traffic on the DD-WRT device then the DMZ equivalent may be used
instead.
Either way you need correct and separate routing and NAT rules for
traffic arriving at the Squid machine to send the appropriate
connections to Squid and anything else to its proper destination.
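As a rough sketch only (the LAN subnet, interface, and intercept port 3129
below are assumptions, not details from this thread), the Squid-machine
side of that usually looks like:

  # on the Raspberry Pi itself, never on the DD-WRT router
  iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 \
      -j REDIRECT --to-ports 3129
  # let non-web traffic pass onward to its real destination
  sysctl -w net.ipv4.ip_forward=1

with a matching "http_port 3129 intercept" listener in squid.conf.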


Amos


Re: [squid-users] Need help setting up DD-WRT router to use Squid as a transparent proxy

2020-01-16 Thread Rafael Akchurin
You can try policy based routing if DD-WRT supports that – see 
https://docs.diladele.com/tutorials/policy_based_routing_squid/index.html
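The general shape of that approach, as a sketch only (the br0 interface,
mark value, table number and the Pi's address 192.168.1.2 are assumptions):

  # on the DD-WRT router: mark port-80 traffic and route it to the Pi
  iptables -t mangle -A PREROUTING -i br0 -p tcp --dport 80 -j MARK --set-mark 3
  ip rule add fwmark 3 table 2
  ip route add default via 192.168.1.2 table 2

This hands the traffic to the Squid box by routing instead of NAT, so the
original destination address survives intact.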

From: squid-users  On Behalf Of 
Robert Marshall
Sent: Thursday, 16 January 2020 09:30
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Need help setting up DD-WRT router to use Squid as a 
transparent proxy

Hi all,

I'm trying to set up a transparent proxy on my network so that all devices are 
forced to use Squid/SquidGuard for network traffic, and can filter out 
undesirable destinations.

I have Squid/SquidGuard running on a Raspberry Pi 4, running the latest release 
of Raspbian Buster. The router is a D-Link DIR-860L, flashed with the 01/14/20 
build of DD-WRT. I tried using the instructions at DD-WRT, but am running into 
problems.

Squid/SquidGuard works fine if I enter in a manual proxy in my Firefox browser. 
However, when I go to configure my router's settings I have problems. The error 
message that's coming up says that I'm passing an invalid URL, and the only 
thing that shows in the error message is the /. This is happening on ALL 
webpages I try and go to, not just the ones which SquidGuard is set to filter 
out.

Helpful hints or directions will be greatly appreciated!

Robert


Re: [squid-users] Need help blocking a specific HTTPS website

2019-03-05 Thread Amos Jeffries
On 6/03/19 5:11 am, Felipe Arturo Polanco wrote:
> I can confirm that: I can see TCP_DENIED requests in the access.log for
> web.whatsapp.com, but the website still loads.
> 
> 1551192823.356     47 192.168.112.144 TCP_DENIED/403 4453 GET
> https://web.whatsapp.com/ws - HIER_NONE/- text/html
> 


Perhaps WhatsApp uses other protocols to get through when denied by the
proxy.

Have you tried blocking UDP ports 80 and 443 (QUIC protocol) in your
firewall?

And of course ports 4244, 5222, 5223, 5228 and 5242.
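For example, on a Linux gateway that could be expressed roughly as (the
FORWARD chain is an assumption; adapt to your firewall):

  iptables -A FORWARD -p udp -m multiport --dports 80,443 -j REJECT
  iptables -A FORWARD -p tcp -m multiport --dports 4244,5222,5223,5228,5242 -j REJECT
  iptables -A FORWARD -p udp -m multiport --dports 4244,5222,5223,5228,5242 -j REJECT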


Amos


Re: [squid-users] Need help blocking a specific HTTPS website

2019-03-05 Thread Felipe Arturo Polanco
I can confirm that: I can see TCP_DENIED requests in the access.log for
web.whatsapp.com, but the website still loads.

1551192823.356 47 192.168.112.144 TCP_DENIED/403 4453 GET
https://web.whatsapp.com/ws - HIER_NONE/- text/html

On Mon, Mar 4, 2019 at 7:21 PM Leonardo Rodrigues 
wrote:

> Em 04/03/2019 19:27, Felipe Arturo Polanco escreveu:
>
> Hi,
>
> I have been trying to block https://web.whatsapp.com/ from squid and I
> have been unable to.
>
> So far I have this:
>
> I can block other HTTPS websites fine
> I can block www.whatsapp.com fine
> I cannot block web.whatsapp.com
>
> I have HTTPS transparent interception enabled and I am bumping all TCP
> connections, but this one still doesn't appear to get blocked by Squid.
>
> This is part of my configuration:
> ===
> acl blockwa1 url_regex whatsapp\.com$
> acl blockwa2 dstdomain .whatsapp.com
> acl blockwa3 ssl::server_name .whatsapp.com
> acl step1 at_step SslBump1
>
>
> blockwa1 and blockwa2 should definitely block web.whatsapp.com;
> your rules seem right.
>
> Can you confirm the web.whatsapp.com accesses are getting through Squid?
> Do these accesses appear in your access.log with something different than
> a DENIED status?
>
>
>
> --
>
>
>   Atenciosamente / Sincerily,
>   Leonardo Rodrigues
>   Solutti Tecnologia
>   http://www.solutti.com.br
>
>   Minha armadilha de SPAM, NÃO mandem email
>   gertru...@solutti.com.br
>   My SPAMTRAP, do not email it
>
>
>


Re: [squid-users] Need help blocking a specific HTTPS website

2019-03-04 Thread Leonardo Rodrigues

Em 04/03/2019 19:27, Felipe Arturo Polanco escreveu:

Hi,

I have been trying to block https://web.whatsapp.com/ from squid and I 
have been unable to.


So far I have this:

I can block other HTTPS websites fine
I can block www.whatsapp.com  fine
I cannot block web.whatsapp.com 

I have HTTPS transparent interception enabled and I am bumping all TCP 
connections, but this one still doesn't appear to get blocked by Squid.


This is part of my configuration:
===
acl blockwa1 url_regex whatsapp\.com$
acl blockwa2 dstdomain .whatsapp.com 
acl blockwa3 ssl::server_name .whatsapp.com 
acl step1 at_step SslBump1



    blockwa1 and blockwa2 should definitely block web.whatsapp.com;
your rules seem right.


    Can you confirm the web.whatsapp.com accesses are getting through
Squid? Do these accesses appear in your access.log with something
different than a DENIED status?
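For reference, the action rules those ACLs normally feed into are shaped
like this (a generic sketch only; your actual http_access and ssl_bump
lines are not shown in the excerpt above):

  http_access deny blockwa1
  http_access deny blockwa2
  ssl_bump peek step1
  ssl_bump terminate blockwa3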




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it




Re: [squid-users] Need help about ICAP scan timeout/max file size for big files

2019-01-08 Thread Alex Rousskov
On 1/8/19 9:46 AM, i...@schroeffu.ch wrote:

> With ClamAV C-ICAP, "MaxStreamSize 25M" is defined as the default,
> so after 25MB scanned by ICAP I can see with tcpdump on port 1344
> "ICAP/1.0 200 OK" from ICAP to Squid, which triggers the browser to
> start the download. That's what I want for F-Secure ICAP as well.

The best solution would be for F-Secure to add support for (or enable in
your setup) "data trickling" or "patience pages". Any workarounds inside
Squid would be either nasty (e.g., timeouts, abandoned transactions,
etc.) or expensive (require Squid or eCAP/ICAP wrapper development).


> if their ICAP really is not sending "ICAP/1.0 200 OK" after X
> Seconds/MB, can I configure SQUID with a workaround?

You can try to specify a timeout via icap_io_timeout. Bugs
notwithstanding, Squid would terminate a connection to the ICAP service
that does not respond in X seconds. You may need to adjust
icap_service_failure_limit and/or icap_service_revival_delay to avoid
marking the affected ICAP service as "down" [too often]. Again, this is
not a proper solution and it may have negative side effects such as
memory leaks and unresponsive ICAP service. It may be worth trying while
you wait for F-Secure.

Unfortunately, the icap_io_timeout may not work if Squid is constantly
writing to the ICAP service (to deliver more virgin body bytes). Squid
should be treating each such write as an I/O, resetting the timeout.
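For concreteness, those knobs go into squid.conf along these lines (the
values are placeholders to illustrate the syntax, not recommendations):

  icap_io_timeout 3 minutes
  icap_service_failure_limit 50 in 1 hour
  icap_service_revival_delay 60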


You can also hack Squid to treat these cases specially. For example, you
could add adaptation_response_timeout or a similar directive that would
work like icap_io_timeout but ignore write activity. If you go down that
route, I suggest posting an RFC with new option description to squid-dev
as the first step.


You can even write an ICAP service (or eCAP adapter) that will add data
trickling or patience pages support to any ICAP service, but that is a
lot of development work!


> The header seems not include the file size. Here is an example of
> 100MB Virus File

Please note that you should test/analyze "real" transactions, not
requests for test files. If real transactions of interest usually lack
the Content-Length header, then timeout-based knobs are your best bet
(see above): There are no ACLs that can match accumulated response size
and, more importantly, there is no directive that would repeatedly
evaluate such ACLs as Squid accumulates the response body while waiting
for the ICAP response.


HTH,

Alex.


Re: [squid-users] Need help about ICAP scan timeout/max file size for big files

2019-01-04 Thread Alex Rousskov
On 1/4/19 3:38 AM, i...@schroeffu.ch wrote:
> How can I configure the ICAP Service to truly let bigger files/longer
> scan times through the icap service marked as "clean"?

Which of the following questions are you asking?

1. How to configure Squid to never send huge files to your ICAP service?

2. How to configure your ICAP service to speed up huge-file decisions?

3. How to configure Squid to send huge files to your ICAP service
   without storing them in Squid memory or in Squid disk cache?

For all questions, do the huge files that you are dealing with have an
HTTP Content-Length response header?

And, if it is question #2, does your ICAP service support ICAP Preview
mode? Have you enabled ICAP previews in your Squid configuration?
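If the service does advertise Preview support, enabling it on the Squid
side is a two-line squid.conf change (the size is only an example):

  icap_preview_enable on
  icap_preview_size 4096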

Alex.


Re: [squid-users] need help with cachemgr

2018-11-28 Thread Alex Rousskov
On 11/28/18 4:59 AM, jmperrote wrote:

> This is the new error that I have when I try squidclient via https +
> certificate.

You have many different problems.

Problem 0: You are not responding on the mailing list. Please keep this
thread on the mailing list so that others can benefit from this triage
and so that others can help you.

Problem 1: You seem to ignore errors and warnings that you can fix on
your own. Please fix the ones you can fix before asking for help with
the remaining problems. When asking for help, explain what you think
each remaining warning/error means, and why you cannot fix that problem.
This approach shows that you invest serious effort into making this work
rather than simply abusing the mailing list as a free replacement for a
system administrator.


Problem 2:

> squidclient -vvv --https --cert /soporte/ssl/educacion.crt -h 10.0.0.4 -p 
> 1084 mgr:info

The --cert option specifies a TLS client certificate. Your reverse
proxy, AFAICT, does not use client certificates. Remove that option. See
"man squidclient" for details about each option you use.


Problem 3:

> WARNING: Failed to load Certificate from /soporte/ssl/educacion.crt

I do not know what went wrong here because you have not provided any
relevant information like whether the file is actually there and can be
read by the user squidclient runs as.


Problem 4:

> X.509 TLS handshake ...
> VERIFY DATUM: The certificate is NOT trusted. The certificate issuer is
> unknown. The name in the certificate does not match the expected.
> WARNING: Insecure Connection

Looks self-explanatory to me: Your squidclient does not trust the server
certificate used by your reverse proxy. You may need to either

* use a --trusted-ca option or
* configure your TLS library environment to always trust the CA that
signed the https_port certificate of your reverse proxy.


Problem 5:

> HTTP/1.1 401 Unauthorized
> X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
> WWW-Authenticate: Basic realm=...

Your reverse proxy requires HTTP client authentication. Depending on
your needs, you should either

* adjust your Squid http_access rules to disable authentication for
cache manager requests or
* give a valid username and password to squidclient (search "man
squidclient" manual page for "authentication" and "WWW" to discover the
right options).
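Putting the fixes for problems 2, 4, and 5 together, the invocation would
take roughly this shape (the CA file path and the credentials are
placeholders, not values from your setup):

  squidclient --https --trusted-ca /path/to/ca.pem -h 10.0.0.4 -p 1084 \
      -U manager -W secret mgr:info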


Potential problem 6:

This may not be relevant to you, but please note that Squid Cache
Manager does not yet support secure queries when Squid is running in SMP
mode. For details, please see
https://wiki.squid-cache.org/Features/CacheManager#Secure_SMP_reports


HTH,

Alex.


Re: [squid-users] need help with cachemgr

2018-11-27 Thread Alex Rousskov
On 11/27/18 6:13 AM, jmperrote wrote:
> Hello, I need help with cachemgr. I am using Squid in reverse proxy mode;
> when I try to connect to retrieve data using squidclient, connecting to
> cachemgr, I only retrieve "squid OK", but the other values (HIT, http
> requests, etc.) that are usually retrieved are not.
> 
> My case when I to try connect:
> 
> squidclient -vv cache_object://localhost/ mgr:info
> -p 1084

Place all named options before the anonymous ones and
use one (pseudo) URL at a time:

  squidclient -vv -p 1084 cache_object://localhost/
  squidclient -vv -p 1084 mgr:info

Alex.


Re: [squid-users] Need help

2017-11-16 Thread Matus UHLAR - fantomas

On 16.11.17 09:42, Vayalpadu, Vedavyas wrote:

Nov 16 10:17:20 dkbavlpxpxy01 squid[91497]:
Failed to select source for 
'https://dkbavwpato02.global.internal.carlsberggroup.com/SES/services/masterdata/administratorServices-1.0.wsdl'

And customer is not able to connect to the application.

External App <-> Proxy <-> Internal application


have you played with always_direct and never_direct?
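If the proxy must always relay to the internal application, the usual
shape is a peer plus never_direct; a sketch (the hostname is taken from
the log above, the port and other options are assumptions):

  cache_peer dkbavwpato02.global.internal.carlsberggroup.com parent 443 0 no-query originserver ssl
  never_direct allow all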

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I'm not interested in your website anymore.
If you need cookies, bake them yourself.


Re: [squid-users] Need help to solve problem with Squid 3.5.26 SSL Bump setting & iptables rules

2017-08-16 Thread Arsalan Hussain
Dear Eliezer

I created a new iptables configuration and it works fine for an hour
(attached).

Both transparent-proxy clients and clients with a configured proxy setting
access the internet through Squid.

But after about an hour the service crashes or becomes unstable, and I need
to restart the squid and iptables services to get it working again.

I found the following errors in access.log when the service gets disturbed.
I don't know the reason, what this traffic is about, or how to resolve it.
When we restart the server, the services start fine again and the internet
works.

1502858587.658 114260 192.168.2.162 TAG_NONE/503 0 CONNECT
dc.services.visualstudio.com:443 - HIER_NONE/- -
1502858587.658 114260 192.168.2.162 TAG_NONE/503 0 CONNECT
dc.services.visualstudio.com:443 - HIER_NONE/- -
1502858587.658 114258 192.168.5.1 TAG_NONE/503 0 CONNECT
update.googleapis.com:443 - HIER_NONE/- -
1502858587.658 114252 192.168.2.125 TAG_NONE/503 0 CONNECT
update.googleapis.com:443 - HIER_NONE/- -
1502858587.658 114256 192.168.2.188 TAG_NONE/503 0 CONNECT
en.wikibooks.org:443 - HIER_NONE/- -
1502858587.658 114256 192.168.2.188 TAG_NONE/503 0 CONNECT
en.wikibooks.org:443 - HIER_NONE/- -
1502858587.658 114256 192.168.2.188 TAG_NONE/503 0 CONNECT
en.wikibooks.org:443 - HIER_NONE/- -
1502858587.658 114256 192.168.2.188 TAG_NONE/503 0 CONNECT
en.wikibooks.org:443 - HIER_NONE/-



On Tue, Aug 1, 2017 at 5:17 PM, Eliezer Croitoru 
wrote:

> Hey,
>
> The iptables rules don't make any sense:
> IPTABLES SETTING
>
> # Generated by iptables-save v1.4.7 on Mon Jul 31 05:43:29 2017
> *filter
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [8330155:41635]
> -A INPUT -i eth1 -j ACCEPT
> -A INPUT -p tcp -m tcp --dport 3128 -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
> -A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
> -A INPUT -j DROP
> COMMIT
> # Completed on Mon Jul 31 05:43:29 2017
>
> There is no PREROUTING in the filter table...
> Take a peek at:
> http://wiki.squid-cache.org/ConfigExamples/Intercept/
> LinuxRedirect#iptables_configuration
>
> and also I suggest you to use intercept ports such as:
> 13128 (for http, port 80)
> 13129 ( for https, port 443)
>
> And not port 3130.
>
> Let me know if it helps with something.
>
> Eliezer
>
> 
> http://ngtech.co.il/lmgtfy/
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Arsalan Hussain
> Sent: Tuesday, August 1, 2017 12:45
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Need help to solve problem with Squid 3.5.26 SSL
> Bump setting & iptables rules
>
> Dear all,
> i have configured squid 3.5.26 SSL bump on CENTOS 6.2 to share internet
> and delay pools to control bandwidth (my configuration files attached)
>
> Problem what i facing and not understanding the issue.
>
> 1- clients who send request-  proxy setting working fine with this
> directive http_port 3128
>  -  Delay pools working fine, internet browsing to all clients using proxy
> is working.
>
> 2- When transparent proxy clients sent http request via iptables ...
> REDIRECT.
> http_port 3129 intercept
> OR
> When transparent proxy clients sent https request via iptables ...
> REDIRECT.
> https_port 3130 intercept ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_certs/squid.pem
> I observed the problem in both cases when client sent request through
> IPTABLES Squid service got failed. When i stop iptables and start squid
> then it start working.
> -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
> -A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
>
> 3-  my objective to setup squid.
>  *  Internet sharing to Proxy setting configured clients.
>  *  Internet sharing to Proxy Transparent clients (Those request
> directed to server from ip route 0.0.0.0 0.0.0.0 Proxy-IP from CISCO
> Network for HTTP and HTTPS Requests without configuring proxy setting
> (coming from wireless).
>  *  delay pools for HTTP and HTTPS both browsing for proxy &
> transparent clients.
>
>
> Kindly if somebody help me to fix my problems and if share any setting
> which works. I had added ssl bump certificate because the service was
> crashing again and again without any reason after a few days or sometime on
> same day.
>
>
>
> --
> With Regards,
>
> Arsalan Hussain
> If you don't fight for what you want, don't cry for what you lose.
>
>


-- 
With Regards,


*Arsalan Hussain*
*Assistant Director, Networks & Information System*

*PRESTON UNIVERSITY*
Add: Plot: 85, Street No: 3, Sector H-8/1, Islamabad, Pakistan
Cell: +92-322-5018611
UAN: (51) 111-707-808 (Ext: 443)
*Don't expect to see a change if you don't make one.*

Re: [squid-users] Need help to solve problem with Squid 3.5.26 SSL Bump setting & iptables rules

2017-08-01 Thread Eliezer Croitoru
Hey,

The iptables rules don't make any sense:
IPTABLES SETTING

# Generated by iptables-save v1.4.7 on Mon Jul 31 05:43:29 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [8330155:41635]
-A INPUT -i eth1 -j ACCEPT  
-A INPUT -p tcp -m tcp --dport 3128 -j ACCEPT
-A INPUT -i lo -j ACCEPT 
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT 
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
-A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
-A INPUT -j DROP
COMMIT
# Completed on Mon Jul 31 05:43:29 2017

There is no PREROUTING in the filter table...
Take a peek at:
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect#iptables_configuration

I also suggest that you use intercept ports such as:
13128 (for http, port 80)
13129 (for https, port 443)

And not port 3130.
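For illustration, the REDIRECT rules belong in the nat table, roughly like
this (eth1 as the LAN-facing interface is an assumption):

  *nat
  :PREROUTING ACCEPT [0:0]
  :POSTROUTING ACCEPT [0:0]
  -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 13128
  -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-ports 13129
  COMMIT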

Let me know if it helps with something.

Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Arsalan Hussain
Sent: Tuesday, August 1, 2017 12:45
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Need help to solve problem with Squid 3.5.26 SSL Bump 
setting & iptables rules

Dear all,
I have configured squid 3.5.26 SSL Bump on CentOS 6.2 to share internet and 
delay pools to control bandwidth (my configuration files are attached).

The problem I am facing and do not understand:

1- Clients with a configured proxy setting work fine with this directive:
http_port 3128
 -  Delay pools work fine; internet browsing works for all clients using
the proxy.

2- When transparent-proxy clients send HTTP requests via iptables REDIRECT:
http_port 3129 intercept
OR when they send HTTPS requests via iptables REDIRECT:
https_port 3130 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_certs/squid.pem
I observed the problem in both cases: when clients send requests through
iptables, the Squid service fails. When I stop iptables and start squid, it
starts working again.
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
-A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130

3-  My objectives for this Squid setup:
 *  Internet sharing for clients with a configured proxy setting.
 *  Internet sharing for transparent-proxy clients (requests directed to the
server by "ip route 0.0.0.0 0.0.0.0 Proxy-IP" from the Cisco network, for HTTP
and HTTPS requests, without configuring a proxy setting; coming from wireless).
 *  Delay pools for both HTTP and HTTPS browsing, for proxy and transparent
clients.


I would be grateful if somebody could help me fix these problems or share any
settings that work. I added the SSL-Bump certificate because the service was
crashing again and again, without any apparent reason, after a few days or
sometimes on the same day.



-- 
With Regards,

Arsalan Hussain
If you don't fight for what you want, don't cry for what you lose.



Re: [squid-users] Need help with Squid on Windows

2016-04-23 Thread Rafael Akchurin
Hello Yuri and all,

I would then try Process Monitor, which will most probably give an answer as 
to why the perl.exe helper does not start.

Best regards,
Rafael

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri Voinov
Sent: Saturday, April 23, 2016 4:57 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Need help with Squid on Windows


Raf,

I can confirm - perl helpers (I've tried to use store-ID with Active
Perl) do not work with the Windows version of Squid. I've tried to configure
it several times, without success, and with the same symptoms.

WBR, Yuri

23.04.16 18:26, Rafael Akchurin пишет:
> Hello Jason, Amos, all,
>
> Possibly the issue can be related to Squid being compiled with Cygwin
https://cygwin.com/ml/cygwin/2012-03/msg00302.html, I'm not sure whether this 
issue with standard output has been fixed in the current Cygwin.
>
> One of the workarounds that could possibly work is to install Cygwin
and build squid yourself as described here:
http://docs.diladele.com/tutorials/build_squid_windows/index.html, this should 
start helpers from Cygwin terminal instead of cmd.
>
> Best regards,
> Rafael Akchurin
> Diladele B.V.
> http://www.quintolabs.com
> http://www.diladele.com
>
> --
> Please take a look at Web Safety - our ICAP based web filter server
for Squid proxy at http://www.diladele.com.
>
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
On Behalf Of Amos Jeffries
> Sent: Saturday, April 23, 2016 5:42 AM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Need help with Squid on Windows
>
> On 23/04/2016 3:09 a.m., Jason Spegal wrote:
>> Hello all,
>>
>> I need help with an issue that is now beyond me. I've installed squid
>> 3.5.16 (From: http://squid.diladele.com/) on my Windows 10 laptop, 
>> and I'm trying to enable the URL Rewrite helpers that I wrote in perl 
>> for my linux server. I've already done the necessary adjustments to 
>> them to make them work on windows. Running them directly seems to be 
>> fine, however when squid runs them they fail to execute. I've gotten 
>> as far as finding out perl is executing okay, however the script is 
>> not. I am unable to figure out how to debug it further. Running squid 
>> with full debugging (squid.exe -N -X -d 7) did not produce any 
>> significant information as to why they were not executing.
>
> You usually need to enable debugging in the helper itself to get that.
> Squid only knows that the helper died on startup.
>
>
>> My linux server is
>> running squid 3.5.11 and is not having any issues with the helpers.
>> The very first thing the helpers do is open a log file and write an 
>> initialization statement to it. I'm not seeing this when squid tried 
>> to execute it, so I'm fairly certain it has something to do with the 
>> execution of the script rather than a problem with the script itself.
>> I've also examined the permissions, and those should be good.
>>
>
> FWIW: Nothing has changed inside Squid for helpers between those two
releases.
>
>
>> Thanks in advance for the help.
>>
>> --Jason
>>
>>
>> squid.conf
>> ---
>> url_rewrite_program /cygdrive/c/strawberry/perl/bin/perl.exe
>> C:\Squid\etc\squid\filtered_sites\squidRed.pl
>> url_rewrite_children 100 startup=10 idle=1 concurrency=10 
>> url_rewrite_access allow all url_rewrite_bypass off
>>
>>
>> Also tried url_rewrite_program
>> /cygdrive/c/squid/etc/squid/filtered_sites/squidRed.pl
>>
>>
>> cache.log
>> ---
>> Squid Cache (Version 3.5.16): Terminated abnormally.
>
> Maybe something left over from this previous aborted Squid instance
and its helpers that kills the next one to start?
>
>> CPU Usage: 0.281 seconds = 0.078 user + 0.203 sys Maximum Resident
>> Size: 1371136 KB Page faults with physical i/o: 5488
>> 2016/04/22 08:53:06 kid1| Set Current Directory to 
>> /cygdrive/c/squid/var/cache/squid
>> 2016/04/22 08:53:06 kid1| Starting Squid Cache version 3.5.16 for 
>> x86_64-unknown-cygwin...
> ...
>> 2016/04/22 08:53:06 kid1| helperOpenServers: Starting 10/100 'perl.exe'
>> processes
> ...
>> 2016/04/22 08:53:06 kid1| WARNING: redirector #Hlpr6 exited
> ...
>> 2016/04/22 08:53:06 kid1| Too few redirector processes are running 
>> (need
>> 1/100)
> ...
>> FATAL: The redirector helpers are crashing too rapidly, need help!
>
> Note that it is only the 6th helper that dies. The first 5 seem to be
> okay at this point.

Re: [squid-users] Need help with Squid on Windows

2016-04-23 Thread Yuri Voinov

Raf,

I can confirm - perl helpers (I've tried to use store-ID with Active
Perl) do not work with the Windows version of Squid. I've tried to configure
it several times, without success, and with the same symptoms.

WBR, Yuri

23.04.16 18:26, Rafael Akchurin пишет:
> Hello Jason, Amos, all,
>
> Possibly the issue can be related to Squid being compiled with Cygwin
https://cygwin.com/ml/cygwin/2012-03/msg00302.html, I'm not sure whether
this issue with standard output has been fixed in the current Cygwin.
>
> One of the workarounds that could possibly work is to install Cygwin
and build squid yourself as described here:
http://docs.diladele.com/tutorials/build_squid_windows/index.html, this
should start helpers from Cygwin terminal instead of cmd.
>
> Best regards,
> Rafael Akchurin
> Diladele B.V.
> http://www.quintolabs.com
> http://www.diladele.com
>
> --
> Please take a look at Web Safety - our ICAP based web filter server
for Squid proxy at http://www.diladele.com.
>
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
On Behalf Of Amos Jeffries
> Sent: Saturday, April 23, 2016 5:42 AM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Need help with Squid on Windows
>
> On 23/04/2016 3:09 a.m., Jason Spegal wrote:
>> Hello all,
>>
>> I need help with an issue that is now beyond me. I've installed squid
>> 3.5.16 (From: http://squid.diladele.com/) on my Windows 10 laptop, and
>> I'm trying to enable the URL Rewrite helpers that I wrote in perl for
>> my linux server. I've already done the necessary adjustments to them
>> to make them work on windows. Running them directly seems to be fine,
>> however when squid runs them they fail to execute. I've gotten as far
>> as finding out perl is executing okay, however the script is not. I am
>> unable to figure out how to debug it further. Running squid with full
>> debugging (squid.exe -N -X -d 7) did not produce any significant
>> information as to why they were not executing.
>
> You usually need to enable debugging in the helper itself to get that.
> Squid only knows that the helper died on startup.
>
>
>> My linux server is
>> running squid 3.5.11 and is not having any issues with the helpers.
>> The very first thing the helpers do is open a log file and write an
>> initialization statement to it. I'm not seeing this when squid tried
>> to execute it, so I'm fairly certain it has something to do with the
>> execution of the script rather than a problem with the script itself.
>> I've also examined the permissions, and those should be good.
>>
>
> FWIW: Nothing has changed inside Squid for helpers between those two
releases.
>
>
>> Thanks in advance for the help.
>>
>> --Jason
>>
>>
>> squid.conf
>> ---
>> url_rewrite_program /cygdrive/c/strawberry/perl/bin/perl.exe
>> C:\Squid\etc\squid\filtered_sites\squidRed.pl
>> url_rewrite_children 100 startup=10 idle=1 concurrency=10
>> url_rewrite_access allow all url_rewrite_bypass off
>>
>>
>> Also tried url_rewrite_program
>> /cygdrive/c/squid/etc/squid/filtered_sites/squidRed.pl
>>
>>
>> cache.log
>> ---
>> Squid Cache (Version 3.5.16): Terminated abnormally.
>
> Maybe something left over from this previous aborted Squid instance
and its helpers that kills the next one to start?
>
>> CPU Usage: 0.281 seconds = 0.078 user + 0.203 sys Maximum Resident
>> Size: 1371136 KB Page faults with physical i/o: 5488
>> 2016/04/22 08:53:06 kid1| Set Current Directory to
>> /cygdrive/c/squid/var/cache/squid
>> 2016/04/22 08:53:06 kid1| Starting Squid Cache version 3.5.16 for
>> x86_64-unknown-cygwin...
> ...
>> 2016/04/22 08:53:06 kid1| helperOpenServers: Starting 10/100 'perl.exe'
>> processes
> ...
>> 2016/04/22 08:53:06 kid1| WARNING: redirector #Hlpr6 exited
> ...
>> 2016/04/22 08:53:06 kid1| Too few redirector processes are running
>> (need
>> 1/100)
> ...
>> FATAL: The redirector helpers are crashing too rapidly, need help!
>
> Note that it is only the 6th helper that dies. The first 5 seem to be
okay at this point.
>
> Maybe something they share which has a limited connection count?
>
> Amos
>

Re: [squid-users] Need help with Squid on Windows

2016-04-23 Thread Rafael Akchurin
Hello Jason, Amos, all,

Possibly the issue can be related to Squid being compiled with Cygwin 
https://cygwin.com/ml/cygwin/2012-03/msg00302.html, I'm not sure whether this 
issue with standard output has been fixed in the current Cygwin.

One of the workarounds that could possibly work is to install Cygwin and build 
squid yourself as described here: 
http://docs.diladele.com/tutorials/build_squid_windows/index.html, this should 
start helpers from Cygwin terminal instead of cmd.

Best regards,
Rafael Akchurin
Diladele B.V. 
http://www.quintolabs.com 
http://www.diladele.com 

--
Please take a look at Web Safety - our ICAP based web filter server for Squid 
proxy at http://www.diladele.com.



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Saturday, April 23, 2016 5:42 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Need help with Squid on Windows

On 23/04/2016 3:09 a.m., Jason Spegal wrote:
> Hello all,
> 
> I need help with an issue that is now beyond me. I've installed squid
> 3.5.16 (From: http://squid.diladele.com/) on my Windows 10 laptop, and 
> I'm trying to enable the URL Rewrite helpers that I wrote in perl for 
> my linux server. I've already done the necessary adjustments to them 
> to make them work on windows. Running them directly seems to be fine, 
> however when squid runs them they fail to execute. I've gotten as far 
> as finding out perl is executing okay, however the script is not. I am 
> unable to figure out how to debug it further. Running squid with full 
> debugging (squid.exe -N -X -d 7) did not produce any significant 
> information as to why they were not executing.

You usually need to enable debugging in the helper itself to get that.
Squid only knows that the helper died on startup.


> My linux server is
> running squid 3.5.11 and is not having any issues with the helpers. 
> The very first thing the helpers do is open a log file and write an 
> initialization statement to it. I'm not seeing this when squid tried 
> to execute it, so I'm fairly certain it has something to do with the 
> execution of the script rather than a problem with the script itself.
> I've also examined the permissions, and those should be good.
> 

FWIW: Nothing has changed inside Squid for helpers between those two releases.


> Thanks in advance for the help.
> 
> --Jason
> 
> 
> squid.conf
> ---
> url_rewrite_program /cygdrive/c/strawberry/perl/bin/perl.exe
> C:\Squid\etc\squid\filtered_sites\squidRed.pl
> url_rewrite_children 100 startup=10 idle=1 concurrency=10 
> url_rewrite_access allow all url_rewrite_bypass off
> 
> 
> Also tried url_rewrite_program
> /cygdrive/c/squid/etc/squid/filtered_sites/squidRed.pl
> 
> 
> cache.log
> ---
> Squid Cache (Version 3.5.16): Terminated abnormally.

Maybe something left over from this previous aborted Squid instance and its 
helpers that kills the next one to start?

> CPU Usage: 0.281 seconds = 0.078 user + 0.203 sys Maximum Resident 
> Size: 1371136 KB Page faults with physical i/o: 5488
> 2016/04/22 08:53:06 kid1| Set Current Directory to 
> /cygdrive/c/squid/var/cache/squid
> 2016/04/22 08:53:06 kid1| Starting Squid Cache version 3.5.16 for 
> x86_64-unknown-cygwin...
...
> 2016/04/22 08:53:06 kid1| helperOpenServers: Starting 10/100 'perl.exe'
> processes
...
> 2016/04/22 08:53:06 kid1| WARNING: redirector #Hlpr6 exited
...
> 2016/04/22 08:53:06 kid1| Too few redirector processes are running 
> (need
> 1/100)
...
> FATAL: The redirector helpers are crashing too rapidly, need help!

Note that it is only the 6th helper that dies. The first 5 seem to be okay at 
this point.

Maybe something they share which has a limited connection count?

Amos



Re: [squid-users] Need help with Squid on Windows

2016-04-22 Thread Amos Jeffries
On 23/04/2016 3:09 a.m., Jason Spegal wrote:
> Hello all,
> 
> I need help with an issue that is now beyond me. I've installed squid
> 3.5.16 (From: http://squid.diladele.com/) on my Windows 10 laptop, and
> I'm trying to enable the URL Rewrite helpers that I wrote in perl for my
> linux server. I've already done the necessary adjustments to them to
> make them work on windows. Running them directly seems to be fine,
> however when squid runs them they fail to execute. I've gotten as far as
> finding out perl is executing okay, however the script is not. I am
> unable to figure out how to debug it further. Running squid with full
> debugging (squid.exe -N -X -d 7) did not produce any significant
> information as to why they were not executing.

You usually need to enable debugging in the helper itself to get that.
Squid only knows that the helper died on startup.
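For reference, the whole stdin/stdout contract a rewriter must honour is
small; here is a minimal sketch (Python purely for illustration, since the
helpers in question are Perl, and the log path is made up):

  # minimal url_rewrite helper: keeps its own log, never rewrites anything
  import sys

  log = open("helper.log", "a", 0)        # unbuffered, so a crash still leaves a trace
  log.write("helper started\n")
  while True:
      line = sys.stdin.readline()         # one request per line from Squid
      if not line:
          break
      fields = line.split()
      channel, url = fields[0], fields[1] # concurrency=N prepends a channel-ID
      log.write("request %s %s\n" % (channel, url))
      sys.stdout.write("%s ERR\n" % channel)  # ERR = leave the URL unchanged
      sys.stdout.flush()                  # an unflushed reply looks like a dead helper

A helper that buffers stdout, or exits before replying, produces exactly the
"WARNING: redirector ... exited" pattern quoted below.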


> My linux server is
> running squid 3.5.11 and is not having any issues with the helpers. The
> very first thing the helpers do is open a log file and write an
> initialization statement to it. I'm not seeing this when squid tried to
> execute it, so I'm fairly certain it has something to do with the
> execution of the script rather than a problem with the script itself.
> I've also examined the permissions, and those should be good.
> 

FWIW: Nothing has changed inside Squid for helpers between those two
releases.


> Thanks in advance for the help.
> 
> --Jason
> 
> 
> squid.conf
> ---
> url_rewrite_program /cygdrive/c/strawberry/perl/bin/perl.exe
> C:\Squid\etc\squid\filtered_sites\squidRed.pl
> url_rewrite_children 100 startup=10 idle=1 concurrency=10
> url_rewrite_access allow all
> url_rewrite_bypass off
> 
> 
> Also tried url_rewrite_program
> /cygdrive/c/squid/etc/squid/filtered_sites/squidRed.pl
> 
> 
> cache.log
> ---
> Squid Cache (Version 3.5.16): Terminated abnormally.

Maybe something left over from this previous aborted Squid instance and
its helpers that kills the next one to start?

> CPU Usage: 0.281 seconds = 0.078 user + 0.203 sys
> Maximum Resident Size: 1371136 KB
> Page faults with physical i/o: 5488
> 2016/04/22 08:53:06 kid1| Set Current Directory to
> /cygdrive/c/squid/var/cache/squid
> 2016/04/22 08:53:06 kid1| Starting Squid Cache version 3.5.16 for
> x86_64-unknown-cygwin...
...
> 2016/04/22 08:53:06 kid1| helperOpenServers: Starting 10/100 'perl.exe'
> processes
...
> 2016/04/22 08:53:06 kid1| WARNING: redirector #Hlpr6 exited
...
> 2016/04/22 08:53:06 kid1| Too few redirector processes are running (need
> 1/100)
...
> FATAL: The redirector helpers are crashing too rapidly, need help!

Note that it is only the 6th helper that dies. The first 5 seem to be
okay at this point.

Maybe something they share which has a limited connection count?

Amos



Re: [squid-users] need help for using squid

2015-09-22 Thread HackXBack
please post your squid.conf





Re: [squid-users] Need help debugging my squid configuration

2015-05-15 Thread Jose Torres-Berrocal
I will try to find help on the Ubuntu forums on how to compile it.

But I really would like to solve my squid 3.3.8 problem, for which I
started this thread.
It may be buggy with SSL_BUMP but should work most of the time. It
compiles on Ubuntu, as they have an Ubuntu source package for the 3.3.8
version.

I need to find out why it is starting and then terminating, to fix the
problem. If I successfully compile 3.5.4 but the problem affects 3.5.4
also, then I have accomplished nothing.

On Thu, May 14, 2015 at 12:34 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 14/05/2015 1:16 p.m., Jose Torres-Berrocal wrote:
 Source from the Squid repository does not come directly compatible with
 the OS.  Source from the Ubuntu repository is made compatible with the OS.

 It has a folder called debian, which has a file called rules, which in
 turn is used with the configure script for the compile options.

 What file or script do I have to change in the squid repository source
 to be able to use the debian/rules file?

 The debian/ folder contents for 3.3 are much different from the ones
 needed for other Squid versions.

 You are better off using the instructions at:
  http://wiki.squid-cache.org/KnowledgeBase/Ubuntu
 to compile and install a new Squid binary. Like this command line:
   ./configure (the options needed) && make install


 Amos


Re: [squid-users] Need help debugging my squid configuration

2015-05-13 Thread Jose Torres-Berrocal
As said, I followed the thread I included in the initial email. I have
added the --enable-ssl and --with-open-ssl directives to the
compilation. The debug_options setting I have used can be seen in the
squid.conf that I included. I tried with ALL,9 also. The logs do not
show any informative error.

I have run sudo squid3 -k parse but it does not return any error. I do
not know how to run it as the proxy user (the password is unknown).

How can I download the squid source you mention from the Ubuntu repository?


Re: [squid-users] Need help debugging my squid configuration

2015-05-13 Thread Yuri Voinov

The latest Squid sources are not in repositories. They are here:

http://www.squid-cache.org/Download/

13.05.15 19:53, Jose Torres-Berrocal пишет:
 As said I followed the thread I included in the initial email. I have
 added the --enable-ssl and --with-open-ssl directives to the
 compilation. The debug_option setting I have used you can see in the
 squid.conf that I included. I tried with ALL, 9 also.  The logs do not
 show any informative error.

 I have run sudo squid3 -k parse but it does not return any error. I do
 not know how to run it as proxy user (password is unknown).

 How can I download the squid source you mention from Ubuntu repository?




Re: [squid-users] Need help debugging my squid configuration

2015-05-13 Thread Jose Torres-Berrocal
Source from the Squid repository does not come directly compatible with
the OS.  Source from the Ubuntu repository is made compatible with the OS.

It has a folder called debian, which has a file called rules, which in
turn is used with the configure script for the compile options.

What file or script do I have to change in the squid repository source
to be able to use the debian/rules file?

On Wed, May 13, 2015 at 9:55 AM, Yuri Voinov yvoi...@gmail.com wrote:


 The latest Squid sources are not in repositories. They are here:

 http://www.squid-cache.org/Download/

 13.05.15 19:53, Jose Torres-Berrocal пишет:
 As said I followed the thread I included in the initial email. I have
 added the --enable-ssl and --with-open-ssl directives to the
 compilation. The debug_option setting I have used you can see in the
 squid.conf that I included. I tried with ALL, 9 also.  The logs do not
 show any informative error.

 I have run sudo squid3 -k parse but it does not return any error. I do
 not know how to run it as proxy user (password is unknown).

 How can I download the squid source you mention from Ubuntu repository?




Re: [squid-users] Need help debugging my squid configuration

2015-05-13 Thread Amos Jeffries
On 13/05/2015 3:25 a.m., Jose Torres-Berrocal wrote:
 Hello All.
 
 I am new to Squid.  I have compiled squid 3.3.8 on Lubuntu 14.04.1
 system following the thread
 http://ubuntuforums.org/showthread.php?t=68246
 
 I have also turned on --enable-ssl-crtd in the compilation rules.

The option alone is not enough to enable SSL support. Your version also
requires --enable-ssl

However, what you are trying to do has not worked in that version for
some years. When you want to participate in the arms race that is
SSL-Bump you must use the latest of the latest versions of Squid. Today
that is squid-3.5.4 snapshot r18325 or later.


 
 When starting, squid terminates but does not provide much error
 information that I could use to try to fix it.
 
 I tried starting squid with the -X option in the startup script and
 adding debug_options to the configuration file, but still do not get
 any valuable information.  At least not for me as a newbie.

-X will spew out so much information the critical piece gets lost.

* Ensure your Squid is built with both the above mentioned ./configure
options.

* Run squid -k parse

* Run normally. cache.log will contain any critical messages.
 - in this case I suspect a message from the ssl_crtd helper about
initializing its certificate database (see the sketch below).

* If necessary add debug_options ALL,1 to your squid.conf to get
non-critical but important messages as well.
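A sketch of that initialization step (the helper path and database
location vary by build; these are examples only):

  /usr/lib/squid/ssl_crtd -c -s /var/lib/squid/ssl_db
  chown -R proxy: /var/lib/squid/ssl_db

with the sslcrtd_program line in squid.conf pointing at the same database.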


Amos



Re: [squid-users] need help with ubuntu upgrade procedure

2014-03-25 Thread Marcus Kool

One way of doing this is to find the Ubuntu spec file for the Ubuntu package 
for Squid
and use the spec file to build a new squid 3.4.x package, and then install the 
new package.
This way all file locations will remain the same and you can also use the 
package manager
to do an easy downgrade if necessary.

If you don't know how to build a package, you might consider to
- find the ubuntu spec file
- find all the parameters in the spec file that are used to call the 
./configure script and
  run ./configure with the parameters that you found
- run make and make install
This way you have the same benefit of installing everything in the same place.
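A rough outline of that procedure on Ubuntu (the squid3 package name and
the 3.4.x directory are assumptions; adjust to your release):

  apt-get source squid3           # fetch the distribution source package
  less squid3-*/debian/rules      # note the ./configure options used there
  cd squid-3.4.x                  # the upstream source tree, downloaded separately
  ./configure <options copied from debian/rules>
  make && sudo make install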

Marcus


On 03/24/2014 09:02 PM, jeffrey j donovan wrote:

Greetings,
I'm running squid 3.3.8 on ubuntu 14.04 and I am having an issue filtering 
https with google groups.
This is my first time performing a squid install on ubuntu.
It appears that the repository version is stuck at 3.3.8 and the docs from 
their location are stuck on ubuntu 12.04.
So I read the change logs and saw numerous SSL fixes between 3.3.8 and 3.4.x.

How do I manually upgrade this package?
http://www.ubuntuupdates.org/package/core/trusty/universe/base/squid

thanks for any assistance
-j



Re: [squid-users] need help with ubuntu upgrade procedure

2014-03-25 Thread jeffrey j donovan

On Mar 25, 2014, at 6:15 AM, Marcus Kool marcus.k...@urlfilterdb.com wrote:

 One way of doing this is to find the Ubuntu spec file for the Ubuntu package 
 for Squid
 and use the spec file to build a new squid 3.4.x package, and then install 
 the new package.
 This way all file locations will remain the same and you can also use the 
 package manager
 to do an easy downgrade if necessary.
 
 If you don't know how to build a package, you might consider to
 - find the ubuntu spec file
 - find all the parameters in the spec file that are used to call the 
 ./configure script and
  run ./configure with the parameters that you found
 - run make and make install
 This way you have the same benefit of installing everything in the same place.
 
 Marcus

Thanks Marcus,
I just got 3.3.8 functioning correctly with qlproxy
I'll look into building my own package.
-j

RE: [squid-users] Need help on squid configuration with remote icap server

2013-12-16 Thread Rafael Akchurin
We use this python code to test the remotely working qlproxy. It may be of help 
to you too. Substitute the 127.0.0.1 with the IP of your choice.

#
# options_good.py
#
import os, socket

# configuration
server  = "127.0.0.1"
port    = 1344
request = """OPTIONS icap://icap.server.net/sample-service ICAP/1.0
Host: icap.server.net
User-Agent: BazookaDotCom-ICAP-Client-Library/2.3

"""

# code
def run():
    # send request
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(1)
    s.connect((server, port))
    s.send(request.replace("\n", "\r\n"))

    # read response
    data = ""
    while 1:
        d = s.recv(1024)
        if not d:
            break
        data += d

    # analyze
    print data

# entry point
run()




From: Eliezer Croitoru elie...@ngtech.co.il
Sent: Monday, December 16, 2013 3:07 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Need help on squid configuration with remote icap 
server

Hey Anil,

The first thing to start with is to use telnet or netcat to verify that
the ICAP service is working at the TCP level.
nc -v 192.168.10.9 1344
should also show some useful information on the basic connection status.
You can also try to run it on the same machine of the ICAP service and
from another IP and\or machine on the network.

The next thing is to try to understand whether the service allows OPTIONS
requests and/or serves clients outside the scope of localhost (127.0.0.1).
This can be a firewall-level issue or the service settings.
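One way to test that by hand (the service path here is taken from the
configuration described below; substitute your own):

  printf 'OPTIONS icap://192.168.10.9:1344/url_check_module ICAP/1.0\r\nHost: 192.168.10.9\r\n\r\n' \
    | nc 192.168.10.9 1344

A working service answers with an ICAP/1.0 status line and its OPTIONS
headers.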

By the way what ICAP service are you using? c-icap ? Is it the basic
c-icap service?

I assume that if it works on the same machine fine the software should
provide the basic functions but it needs to be tested.
Squid first test for an OPTIONS icap request which is kind of a echo
ping test for the ICAP service state.

In a case you got into the level of tcpdump I would try to just see if
GreasySpoon works on the same topology and hosts:
https://github.com/jburnim/GreasySpoon

It is a nice ICAP service which actually works very well and is good for
testing purposes.
I have not used it in a production network, but it shows how the protocol
is implemented, in a very good way that can be tested and learned from.

The settings which you describe are a bit weird, but leave that for now.

All The Bests,
Eliezer

On 11/12/13 13:17, Anil Kapu wrote:
 Hi,

 I'm new to squid and ICAP and am requesting help. I'm trying to
 setup a URL filtering feature provided by c-icap server. I'm having
 trouble configuring my Squid Server to communicate with the ICAP
 server setup on a remote machine. If I have ICAP server on same
 machine as Squid server(127.0.0.1), there is no issue in communication
 between squid and ICAP server. URL blocking also occurs successfully

 Following is the setup:
 I have setup Squid on 192.168.10.8 and ICAP server on 192.168.10.9, on
 192.168.10.8 in squid.conf file I have provided icap_service
 service_req reqmod_precache routing=on bypass=1 icap://icap server
 ip:1344/url_check_module.

 When I try to open any URL on the machine where squid is setup I get
 following error in squid log optional ICAP service is down after an
 options fetch failure: icap://192.168.10.9:1344/url_check_module
 [down,!opt] (I have setup my iptables to route all the http traffic
 to squid port 3128)

 I have attached my squid config file setting below

 Any help here is much appreciated
 Thanks
 Anil


Re: [squid-users] Need help on squid configuration with remote icap server

2013-12-16 Thread Anil Kapu
Hi Eliezer,

Thanks for the detailed troubleshooting steps. I nailed down the
issue by using telnet. I had initially set up squid and c-icap on the same
machine and had put iptables rules in place to divert all the http traffic
through squid's default port. In doing so I might have messed up the rules,
which were blocking squid from communicating with the icap server configured
on another machine.

Thanks for the info, I will use these steps in the future.
Best,
Anil

On Mon, Dec 16, 2013 at 7:37 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey Anil,

 The first thing to start with is to use telnet or netcat to verify that the
 ICAP service is working in the TCP level.
 nc -v 192.168.10.9 1344
 should also show some useful information on the basic connection status.
 You can also try to run it on the same machine of the ICAP service and from
 another IP and\or machine on the network.

 The next thing is to try to understand if the service allows OPTIONS
 requests or\and service outside the scope of the localhost(127.0.0.1).
 It can be firewall level or service settings.

 By the way what ICAP service are you using? c-icap ? Is it the basic c-icap
 service?

 I assume that if it works on the same machine fine the software should
 provide the basic functions but it needs to be tested.
 Squid first test for an OPTIONS icap request which is kind of a echo ping
 test for the ICAP service state.

 In a case you got into the level of tcpdump I would try to just see if
 GreasySpoon works on the same topology and hosts:
 https://github.com/jburnim/GreasySpoon

 It is a nice ICAP service which actually works very well and is good for
 testing purposes.
 I have not used it in a production network but it shows how the protocol
 implemented in a very good way that can be tested and learned.

 The settings which you describe is a bit weird  but leave it for now.

 All The Bests,
 Eliezer


 On 11/12/13 13:17, Anil Kapu wrote:

 Hi,

 I'm new to squid and ICAP and am requesting help. I'm trying to
 setup a URL filtering feature provided by c-icap server. I'm having
 trouble configuring my Squid Server to communicate with the ICAP
 server setup on a remote machine. If I have ICAP server on same
 machine as Squid server(127.0.0.1), there is no issue in communication
 between squid and ICAP server. URL blocking also occurs successfully

 Following is the setup:
 I have setup Squid on 192.168.10.8 and ICAP server on 192.168.10.9, on
 192.168.10.8 in squid.conf file I have provided icap_service
 service_req reqmod_precache routing=on bypass=1 icap://icap server
 ip:1344/url_check_module.

 When I try to open any URL on the machine where squid is setup I get
 following error in squid log optional ICAP service is down after an
 options fetch failure: icap://192.168.10.9:1344/url_check_module
 [down,!opt] (I have setup my iptables to route all the http traffic
 to squid port 3128)

 I have attached my squid config file setting below

 Any help here is much appreciated
 Thanks
 Anil




Re: [squid-users] Need help on squid configuration with remote icap server

2013-12-16 Thread Anil Kapu
Hi Eliezer,

Thanks for the detailed troubleshooting steps. I nailed down the
issue by using telnet. I had initially set up squid and c-icap on the same
machine and had put iptables rules in place to divert all the http traffic
through squid's default port. In doing so I might have messed up the rules,
which were blocking squid from communicating with the icap server configured
on another machine.

I'm using C-ICAP.

Thanks for the info, I will use these steps in the future.
Best,
Anil


Re: [squid-users] Need help on squid configuration with remote icap server

2013-12-16 Thread Anil Kapu
Hi Rafael,

That would definitely be handy for testing my environment setup in the future.
Thanks for sharing it.

Best
-Anil


On Mon, Dec 16, 2013 at 5:35 PM, Rafael Akchurin
rafael.akchu...@diladele.com wrote:
 We use this python code to test the remotely working qlproxy. It may be of 
 help to you too. Substitute the 127.0.0.1 with the IP of your choice.

 #
 # options_good.py
 #
 import os, socket

 # configuration
 server = "127.0.0.1"
 port   = 1344
 request = """OPTIONS icap://icap.server.net/sample-service ICAP/1.0
 Host: icap.server.net
 User-Agent: BazookaDotCom-ICAP-Client-Library/2.3

 """

 # code
 def run():
     # send request (ICAP, like HTTP, wants CRLF line endings)
     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     s.setblocking(1)
     s.connect((server, port))
     s.send(request.replace("\n", "\r\n"))

     # read response
     data = ""
     while 1:
         d = s.recv(1024)
         if not d: break
         data += d

     # analyze (print the raw ICAP response)
     print data

 # entry point
 run()



 
 From: Eliezer Croitoru elie...@ngtech.co.il
 Sent: Monday, December 16, 2013 3:07 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Need help on squid configuration with remote icap 
 server

 Hey Anil,

 The first thing to start with is to use telnet or netcat to verify that
 the ICAP service is working at the TCP level.
 nc -v 192.168.10.9 1344
 should also show some useful information on the basic connection status.
 You can also try to run it on the same machine as the ICAP service and
 from another IP and/or machine on the network.

 The next thing is to try to understand whether the service allows OPTIONS
 requests and/or serves clients outside the scope of localhost (127.0.0.1).
 It can be a firewall-level restriction or the service settings.

 By the way, what ICAP service are you using? c-icap? Is it the basic
 c-icap service?

 I assume that if it works fine on the same machine the software provides
 the basic functions, but it needs to be tested.
 Squid first tests with an OPTIONS ICAP request, which is kind of an
 echo/ping test of the ICAP service state.

 In case you get to the level of tcpdump, I would try to just see if
 GreasySpoon works on the same topology and hosts:
 https://github.com/jburnim/GreasySpoon

 It is a nice ICAP service which actually works very well and is good for
 testing purposes.
 I have not used it in a production network, but it shows how the protocol
 is implemented, in a way that can be tested and learned from.

 The settings which you describe are a bit weird, but leave that for now.

 All The Bests,
 Eliezer

 On 11/12/13 13:17, Anil Kapu wrote:
 Hi,

 I'm new to squid and ICAP and am requesting help. I'm trying to set
 up the URL filtering feature provided by the c-icap server. I'm having
 trouble configuring my Squid server to communicate with an ICAP
 server set up on a remote machine. If I have the ICAP server on the same
 machine as the Squid server (127.0.0.1), there is no issue in communication
 between squid and the ICAP server. URL blocking also occurs successfully.

 Following is the setup:
 I have set up Squid on 192.168.10.8 and the ICAP server on 192.168.10.9; on
 192.168.10.8, in the squid.conf file, I have provided icap_service
 service_req reqmod_precache routing=on bypass=1
 icap://<icap-server-ip>:1344/url_check_module.

 When I try to open any URL on the machine where squid is set up, I get the
 following error in the squid log: "optional ICAP service is down after an
 options fetch failure: icap://192.168.10.9:1344/url_check_module
 [down,!opt]" (I have set up my iptables to route all the http traffic
 to squid port 3128).

 I have attached my squid config file settings below.

 Any help here is much appreciated
 Thanks
 Anil


Re: [squid-users] Need help on squid configuration with remote icap server

2013-12-15 Thread Eliezer Croitoru

Hey Anil,

The first thing to start with is to use telnet or netcat to verify that 
the ICAP service is working at the TCP level.

nc -v 192.168.10.9 1344
should also show some useful information on the basic connection status.
You can also try to run it on the same machine as the ICAP service and 
from another IP and/or machine on the network.


The next thing is to try to understand whether the service allows OPTIONS 
requests and/or serves clients outside the scope of localhost (127.0.0.1).

It can be a firewall-level restriction or the service settings.

By the way, what ICAP service are you using? c-icap? Is it the basic 
c-icap service?


I assume that if it works fine on the same machine the software provides 
the basic functions, but it needs to be tested.
Squid first tests with an OPTIONS ICAP request, which is kind of an 
echo/ping test of the ICAP service state.


In case you get to the level of tcpdump, I would try to just see if 
GreasySpoon works on the same topology and hosts:

https://github.com/jburnim/GreasySpoon

It is a nice ICAP service which actually works very well and is good for 
testing purposes.
I have not used it in a production network, but it shows how the protocol 
is implemented, in a way that can be tested and learned from.


The settings which you describe are a bit weird, but leave that for now.

All The Bests,
Eliezer

On 11/12/13 13:17, Anil Kapu wrote:

Hi,

I'm new to squid and ICAP and am requesting help. I'm trying to set
up the URL filtering feature provided by the c-icap server. I'm having
trouble configuring my Squid server to communicate with an ICAP
server set up on a remote machine. If I have the ICAP server on the same
machine as the Squid server (127.0.0.1), there is no issue in communication
between squid and the ICAP server. URL blocking also occurs successfully.

Following is the setup:
I have set up Squid on 192.168.10.8 and the ICAP server on 192.168.10.9; on
192.168.10.8, in the squid.conf file, I have provided icap_service
service_req reqmod_precache routing=on bypass=1
icap://<icap-server-ip>:1344/url_check_module.

When I try to open any URL on the machine where squid is set up, I get the
following error in the squid log: "optional ICAP service is down after an
options fetch failure: icap://192.168.10.9:1344/url_check_module
[down,!opt]" (I have set up my iptables to route all the http traffic
to squid port 3128).

I have attached my squid config file settings below.

Any help here is much appreciated
Thanks
Anil




Re: [squid-users] Need help on squid configuration with remote icap server

2013-12-11 Thread Alex Rousskov
On 12/11/2013 04:17 AM, Anil Kapu wrote:

 Following is the setup:
 I have set up Squid on 192.168.10.8 and the ICAP server on 192.168.10.9; on
 192.168.10.8, in the squid.conf file, I have provided icap_service
 service_req reqmod_precache routing=on bypass=1
 icap://<icap-server-ip>:1344/url_check_module.
 
 When I try to open any URL on the machine where squid is set up, I get the
 following error in the squid log: "optional ICAP service is down after an
 options fetch failure: icap://192.168.10.9:1344/url_check_module
 [down,!opt]" (I have set up my iptables to route all the http traffic
 to squid port 3128).

Your Squid cannot get an ICAP OPTIONS response from the ICAP server.
What happens when, on the Squid box, you do:

  $ ping 192.168.10.9

and

  $ telnet 192.168.10.9 1344


Thank you,

Alex.

 icap_enable on
 icap_service service_req reqmod_precache routing=on bypass=1
 icap://192.168.10.9:1344/url_check_module
 adaptation_access service_req allow all
 icap_service_failure_limit -1



Re: [squid-users] Need help on Squid Setup

2013-11-12 Thread Amos Jeffries
On 12/11/2013 8:19 p.m., Durga Prasath wrote:
 Hello All,
 
 I am trying to set up a Squid proxy for our internal users. We want to
 restrict access to only a few domains and URLs.
 
 The requirement I have is that I should allow
 https://www.google.co.in/search while other URLs should be banned, e.g.,
 if users try to access https://www.google.co.in/blogsearch or
 https://www.google.co.in/imagesearch they should be restricted, and only
 /search should be allowed.
 
 The options url_regex or urlpath_regex are not working.
 
 Can someone help on this requirement on how to setup this using squid?

This is HTTPS traffic.

When it goes through an HTTP proxy it uses special CONNECT requests.
Those requests contain *only* the domain name and port (usually 443)
being connected to, and some headers related to what agent is requesting
the tunnel connection be setup. Path and other parts of the URL are not
available for access control to use.
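
For illustration, the entire request the proxy sees for an HTTPS site is a
tunnel setup of roughly this shape (a sketch; the header values are examples):

  CONNECT www.google.co.in:443 HTTP/1.1
  Host: www.google.co.in:443
  User-Agent: Mozilla/5.0 (example)

Nothing beyond the host and port is visible - no path, no query string.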

To do what you want, you will have to hijack the HTTPS/SSL connection,
decrypt the users traffic, apply your controls, then re-encrypt. Squid
can do that with the SSL-bump feature, BUT before using it please check
with your local lawyer - using it is considered illegal wiretapping
and/or breach of privacy in many countries.
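
If it is legal in your jurisdiction, a minimal bumping setup is only a few
lines (a sketch, assuming a Squid 3.x built with SSL support; the CA file
path is an example):

  http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/myCA.pem
  ssl_bump server-first all

Once traffic is bumped, url_regex / urlpath_regex ACLs see the full decrypted
URL, so the "/search only" policy becomes expressible with http_access rules.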

Amos


Re: [squid-users] Need help on Squid Setup

2013-11-12 Thread Durga Prasath
Thanks for your email, Amos. Is there any other way that we can get
this done other than SSL-bump? Can any URL redirector program help
us... (I did check here, and usage of ssl_bump is illegal.)


Thanks and Regards,
Durga Prasath



On Tue, Nov 12, 2013 at 1:35 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 12/11/2013 8:19 p.m., Durga Prasath wrote:
 Hello All,

 I am trying to set up a Squid proxy for our internal users. We want to
 restrict access to only a few domains and URLs.
 
 The requirement I have is that I should allow
 https://www.google.co.in/search while other URLs should be banned, e.g.,
 if users try to access https://www.google.co.in/blogsearch or
 https://www.google.co.in/imagesearch they should be restricted, and only
 /search should be allowed.

 The options url_regex or urlpath_regex are not working.

 Can someone help on this requirement on how to setup this using squid?

 This is HTTPS traffic.

 When it goes through an HTTP proxy it uses special CONNECT requests.
 Those requests contain *only* the domain name and port (usually 443)
 being connected to, and some headers related to what agent is requesting
 the tunnel connection be setup. Path and other parts of the URL are not
 available for access control to use.

 To do what you want, you will have to hijack the HTTPS/SSL connection,
 decrypt the users traffic, apply your controls, then re-encrypt. Squid
 can do that with the SSL-bump feature, BUT before using it please check
 with your local lawyer - using it is considered illegal wiretapping
 and/or breach of privacy in many countries.

 Amos


Re: [squid-users] Need help on Squid Setup

2013-11-12 Thread Amos Jeffries
On 13/11/2013 6:21 p.m., Durga Prasath wrote:
 Thanks for your email, Amos. Is there any other way that we can get
 this done other than SSL-bump? Can any URL redirector program help
 us... (I did check here, and usage of ssl_bump is illegal.)

Unfortunately no, that is the only way.

Amos



Re: [squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu

2013-08-28 Thread Eliezer Croitoru
 178.173.12.70 TCP_MISS/503 4224 GET
 http://www.netcontractor.pl/favicon.ico - HIER_DIRECT/78.46.37.186 text/html
 1377506618.709  60835 178.173.12.70 TCP_MISS/503 4199 GET
 http://etutorials.org/favicon.ico - HIER_DIRECT/195.234.5.139 text/html
 1377506618.709  61011 178.173.12.70 TCP_MISS/503 4420 GET
 http://www.packtpub.com/favicon.ico - HIER_DIRECT/83.166.169.231 text/html
 1377506620.529  60830 178.173.12.70 TCP_MISS/503 4223 GET
 http://www.thegeekstuff.com/favicon.ico - HIER_DIRECT/192.254.201.75
 text/html
 1377506620.529  60659 178.173.12.70 TCP_MISS/503 4053 GET
 http://www.web-polygraph.org/favicon.ico - HIER_DIRECT/209.169.10.130
 text/html
 1377506620.530  60829 178.173.12.70 TCP_MISS/503 4099 GET
 http://ubuntuforums.org/favicon.ico - HIER_DIRECT/91.189.94.12 text/html
 1377506622.740 240843 178.173.12.70 TCP_MISS/503 4964 GET
 http://code.google.com/p/shellinabox/ - HIER_DIRECT/74.125.236.164 text/html
 1377506624.743  61038 178.173.12.70 TCP_MISS/503 4150 GET
 http://www.tucny.com/favicon.ico - HIER_DIRECT/74.125.135.121 text/html
 1377506625.548 240492 178.173.12.70 TCP_MISS/503 4263 GET
 http://gravatar.com/avatar/33be8eebf9ff1375eecabb6d45bb84f0/? -
 HIER_DIRECT/72.233.69.5 text/html
 1377506625.744 240688 178.173.12.70 TCP_MISS/503 4263 GET
 http://gravatar.com/avatar/10c08133f930b023f8a29f7aca903ade/? -
 HIER_DIRECT/72.233.69.4 text/html
 1377506625.744 240687 178.173.12.70 TCP_MISS/503 4263 GET
 http://gravatar.com/avatar/bbafaf9e10ccbeadb05132f0907eef62/? -
 HIER_DIRECT/72.233.69.4 text/html
 1377506629.328  59995 178.173.12.70 TCP_MISS_ABORTED/000 0 GET
 http://um16.eset.com/eset_eval/update.ver - HIER_DIRECT/93.184.71.10 -
 1377506633.748 240973 178.173.12.70 TCP_MISS/503 7081 GET
 http://cisco.112.2o7.net/b/ss/cisco-us,cisco-usprodswitches/1/H.24.3/s641795
 77133309? - HIER_DIRECT/66.235.132.232 text/html
 1377506674.091  0 :: TCP_DENIED/403 3788 GET
 http://backend-kid2:4002/squid-internal-periodic/store_digest - HIER_NONE/-
 text/html
 1377506675.522  59980 178.173.12.70 TCP_MISS/503 4048 GET
 http://wiki.squid-cache.org/favicon.ico - HIER_DIRECT/77.93.254.178
 text/html
 1377506680.531  59983 178.173.12.70 TCP_MISS/503 4053 GET
 http://www.web-polygraph.org/favicon.ico - HIER_DIRECT/209.169.10.130
 text/html
 1377506687.797  61064 178.173.12.70 TCP_MISS/503 4920 GET
 http://beacon-1.newrelic.com/1/c7e812077e? - HIER_DIRECT/50.31.164.168
 text/html
 1377506690.518  61188 178.173.12.70 TCP_MISS/503 4163 GET
 http://um16.eset.com/eset_eval/update.ver - HIER_DIRECT/93.184.71.10
 text/html
 1377506734.092  0 :: TCP_DENIED/403 3788 GET
 http://backend-kid3:4003/squid-internal-periodic/store_digest - HIER_NONE/-
 text/html
 1377506740.804 180166 178.173.12.70 TCP_MISS/503 4044 GET
 http://packages.debian.org/favicon.ico - HIER_DIRECT/82.195.75.113 text/html
 1377506863.961 241103 178.173.12.70 TCP_MISS/503 4951 GET
 http://code.google.com/favicon.ico - HIER_DIRECT/74.125.236.166 text/html
 ##
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
 Sent: Wednesday, August 28, 2013 9:55 AM
 To: Mohsen Dehghani
 Subject: Re: [squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu
 
 On 24/08/2013 6:26 p.m., Mohsen Dehghani wrote:
 Thanks
 But my bandwidth is going to be extended to 2Gbps. Do workers still 
 perform better than multiple instances?
 
 I'm not sure of the answer to that one, sorry. You are in a quite select
 group at present dealing with Gbps traffic rates.
 (If you understand Eliezer's response earlier it sounds good, though I'm not
 sure I understand the specifics myself yet.)
 
 Amos
 
 



RE: [squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu

2013-08-27 Thread Mohsen Dehghani
/shellinabox/ - HIER_DIRECT/74.125.236.164 text/html
1377506624.743  61038 178.173.12.70 TCP_MISS/503 4150 GET
http://www.tucny.com/favicon.ico - HIER_DIRECT/74.125.135.121 text/html
1377506625.548 240492 178.173.12.70 TCP_MISS/503 4263 GET
http://gravatar.com/avatar/33be8eebf9ff1375eecabb6d45bb84f0/? -
HIER_DIRECT/72.233.69.5 text/html
1377506625.744 240688 178.173.12.70 TCP_MISS/503 4263 GET
http://gravatar.com/avatar/10c08133f930b023f8a29f7aca903ade/? -
HIER_DIRECT/72.233.69.4 text/html
1377506625.744 240687 178.173.12.70 TCP_MISS/503 4263 GET
http://gravatar.com/avatar/bbafaf9e10ccbeadb05132f0907eef62/? -
HIER_DIRECT/72.233.69.4 text/html
1377506629.328  59995 178.173.12.70 TCP_MISS_ABORTED/000 0 GET
http://um16.eset.com/eset_eval/update.ver - HIER_DIRECT/93.184.71.10 -
1377506633.748 240973 178.173.12.70 TCP_MISS/503 7081 GET
http://cisco.112.2o7.net/b/ss/cisco-us,cisco-usprodswitches/1/H.24.3/s641795
77133309? - HIER_DIRECT/66.235.132.232 text/html
1377506674.091  0 :: TCP_DENIED/403 3788 GET
http://backend-kid2:4002/squid-internal-periodic/store_digest - HIER_NONE/-
text/html
1377506675.522  59980 178.173.12.70 TCP_MISS/503 4048 GET
http://wiki.squid-cache.org/favicon.ico - HIER_DIRECT/77.93.254.178
text/html
1377506680.531  59983 178.173.12.70 TCP_MISS/503 4053 GET
http://www.web-polygraph.org/favicon.ico - HIER_DIRECT/209.169.10.130
text/html
1377506687.797  61064 178.173.12.70 TCP_MISS/503 4920 GET
http://beacon-1.newrelic.com/1/c7e812077e? - HIER_DIRECT/50.31.164.168
text/html
1377506690.518  61188 178.173.12.70 TCP_MISS/503 4163 GET
http://um16.eset.com/eset_eval/update.ver - HIER_DIRECT/93.184.71.10
text/html
1377506734.092  0 :: TCP_DENIED/403 3788 GET
http://backend-kid3:4003/squid-internal-periodic/store_digest - HIER_NONE/-
text/html
1377506740.804 180166 178.173.12.70 TCP_MISS/503 4044 GET
http://packages.debian.org/favicon.ico - HIER_DIRECT/82.195.75.113 text/html
1377506863.961 241103 178.173.12.70 TCP_MISS/503 4951 GET
http://code.google.com/favicon.ico - HIER_DIRECT/74.125.236.166 text/html
##

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 28, 2013 9:55 AM
To: Mohsen Dehghani
Subject: Re: [squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu

On 24/08/2013 6:26 p.m., Mohsen Dehghani wrote:
 Thanks
 But my bandwidth is going to be extended to 2Gbps. Do workers still 
 perform better than multiple instances?

I'm not sure of the answer to that one, sorry. You are in a quite select
group at present dealing with Gbps traffic rates.
(If you understand Eliezer's response earlier it sounds good, though I'm not
sure I understand the specifics myself yet.)

Amos




Re: [squid-users] [NEED HELP] TPROXY + L2 WCCP + multi cpu

2013-08-21 Thread Amos Jeffries

On 21/08/2013 1:17 a.m., Mohsen Dehghani wrote:

Hi team

I have already implemented tproxy + L2 wccp and it works perfectly except for
one thing: squid just uses one cpu (core) and the other cores on a DELL R710
are wasted.
I have about 140 Mbps of traffic and it utilizes 50% of one core. When I
decided to run multi-cpu squid using this guide:

http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem

I noticed that the backend receives the requests with the ip address of the
frontend (127.0.0.1).
As my squid machine does not have any public ip (I just used tproxy before),
it cannot take the request and forward it to the frontend. It means the
backend does not spoof the client ip.

My question is: how can I force the backend to use the client ip address to
fetch requests from internet servers?

My squid version is 3.3.8
My machine does not have any public IP


With the 3.3 series you are likely to find that SMP workers 
(http://wiki.squid-cache.org/Features/SmpScale) perform better than 
separate Squid instances. The config file is far simpler and, being a 
single layer, the TPROXY relay issue is not present.
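
For illustration, an SMP workers setup is only a couple of squid.conf lines
(a sketch; the worker count is an example for a multi-core box):

  # run 4 kid processes sharing one configuration (example count)
  workers 4
  # each worker accepts intercepted traffic on the same port
  http_port 3129 tproxy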



In theory you can pass TPROXY details through two layers by using the 
no-tproxy option on the front layer's cache_peer line and 
follow_x_forwarded_for allow localhost on the backend layer. It may 
also require the tproxy http_port option on the backend layer to handle 
setup of the outgoing spoofing properly.
 Just theorizing here; if anyone wants to try it please inform us on 
how it goes :-) It will definitely fail unless both layers are on the 
same box; if they are, it should work.
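
Spelled out as config, that theory would look something like this (untested,
per the caveat above; ports and names are examples only):

  # frontend layer: relay to the backend without TPROXY spoofing toward it
  cache_peer 127.0.0.1 parent 4002 0 no-tproxy name=backend1

  # backend layer: trust the frontend's X-Forwarded-For and spoof outbound
  http_port 4002 tproxy
  follow_x_forwarded_for allow localhost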


Amos


Re: [squid-users] Need help with squid snmp with PRTG Monitor MIB and Oidlib

2013-08-19 Thread Amos Jeffries

Looks like nobody who read your post knows.

All we can do is provide details of what each Squid OID presents; how you 
configure those OIDs into your monitoring software is something more for 
the PRTG or Oidlib help forums/groups/mailing lists.


Amos



Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-22 Thread a...@imaginers.org
Dear All!
I also have a problem running ssl-bump with an intermediate CA, using a signed
certificate from a CA.
My setup is as follows:
squid-3.3.3-20130418-r12525 with
- https_port 3130 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid33/ssl_cert/server.pem
key=/etc/squid33/ssl_cert/key.pem
- ssl_bump server-first all
- sslproxy_cert_error allow all
- sslproxy_cert_adapt setCommonName ssl::certDomainMismatch
following the rules http://wiki.squid-cache.org/Features/MimicSslServerCert

This is working fine when using my self-generated CA for signing the requests;
however, I want to get rid of the browser warning, so I try to use a CA already
recognized in the browser, which should be possible following this ticket:
http://bugs.squid-cache.org/show_bug.cgi?id=3426 (already mentioned)

But no matter what I do I can't get rid of the browser warning. If I use a self
signed root CA or certificate, squid detects it is self signed and does not
append any intermediate CA or other chain.
If I generate a csr and send it to a CA I get back a .crt and an
intermediate-bundle; I pack them up with the key in a single .pem file and restart
squid - then a chain is displayed in the browser, but now with one 'cert' too many
(imho) and marked as invalid. Firefox reports sec_error_unknown_issuer, safari
says invalid chain length.

For example in the browser details it looks like this:
RootCA (which is marked fine by the browser) - Intermediate CA (marked invalid)
- Certificate signed and created by the csr (marked invalid) - fake
certificate created by squid for the requested site (marked invalid)

If anyone has a running setup without importing the self-signed CA to all
browsers please let me know.

Thanks for any feedback,
Alex


Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-22 Thread Alex Rousskov
On 04/22/2013 10:36 AM, a...@imaginers.org wrote:


 This is working fine when using my self generated CA for signing the requests

Let's call this CA selfCA.


 I want to get rid of the browser warning so I try to use a CA already
 recognized in the browser, which should be possible following this ticket:
 http://bugs.squid-cache.org/show_bug.cgi?id=3426 (already mentioned)

You may have misinterpreted what that bug report says. The reporter
placed his selfCA into the browser. The reporter did not use a CA
certificate from a well-known CA root in his signing chain -- it is not
possible to do that because you do not have the private key from that
well-known root CA certificate.

You should use selfCA as root CA of your signing chain and you have to
place that selfCA in the browser.
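
For example, such a selfCA can be generated with OpenSSL along these lines (a
sketch; file names, key size, and validity period are illustrative):

  # create a private key and a self-signed CA certificate in one step
  openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes \
      -x509 -keyout selfCA.key -out selfCA.crt

selfCA.crt is what gets imported into the browsers; selfCA.key plus
selfCA.crt are what Squid uses to sign the generated certificates.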


 If anyone has a running setup without importing the self-signed CA to all
 browsers please let me know.

It is not possible to bump traffic without importing your self-signed
root CA into all browsers. If it were possible, SSL would have been useless.


HTH,

Alex.



Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-15 Thread Alex Rousskov
On 04/14/2013 11:44 PM, Prasanna Venkateswaran wrote:

 Can someone please help me out here? In a nutshell, I am using a
 properly signed certificate (not self signed) to generate certificates.
 The chain is my certificate -> intermediate CA -> root CA. 

The root certificate is still a fake self-signed certificate though,
right? All root certificates are self-signed, of course. I just want to
double check that you are not using a true certificate from a
well-known root CA in your chain because in 99.99% of SslBump cases we
see around here that would not work.


 I cannot
 make squid send the entire certificate chain to the clients and this
 is breaking many applications in our network.

FWIW, you do not really want to send the entire chain. The root
certificate needs to be installed by clients. Squid needs to send the
configured intermediate and the generated leaf certificates. There is no
point in sending the root certificate because a client would not be able
to validate it unless it already has that root certificate installed!

Does your Squid send the configured intermediate certificate with the
generated leaf one? We have added some code to make that work and that
code should be in v3.3 you are using. Here is the corresponding commit
message with more details about Squid logic used to send the
intermediate certificate:

 revno: 11820
 committer: Christos Tsantilas chtsa...@users.sourceforge.net
 branch nick: trunk
 timestamp: Thu 2011-10-27 18:27:25 +0300
 message:
   sslBump: Send intermediate CA
   
   SslBump code assumed that it is signing generated certificates with a root CA
   certificate. Root certificates are usually not sent along with the server
   certificates because clients must have them independently installed or
   built-in. Squid was not sending the signing certificate.
   
   In many environments, Squid signing certificate is intermediate (i.e., it
   belongs to a non-root CA). If Squid does not send that intermediate signing
   certificate with the generated one, the client will not be able to establish
   a complete chain of trust from the generated fake to the root CA certificate,
   leading to errors.
   
   With this change, Squid may send the signing certificate (along with the
   generated one) using the following rules:
   
    * If the configured signing certificate is self-signed,
      then just send the generated certificate alone.
      Note that root CA certificates are self-signed (by the root CA).
   
    * Otherwise (i.e., if the configured signing certificate is an intermediate
      CA certificate), send both the intermediate CA and the generated fake
      certificate.
   
    * If Squid sends the intermediate CA certificate, Squid also sends
      all other certificates from the cert= file. Sending a chain with
      multiple intermediate CA certificates may be required when the Squid
      signing certificate was signed by another intermediate CA.
   
   
   This is a Measurement Factory Project


Is your configured signing certificate self-signed or intermediate? Does
Squid send it along with the generated fake certificate?


HTH,

Alex.



 On 4/11/13, Prasanna Venkateswaran prasca...@gmail.com wrote:
 Hi Guy,
 We want to be a man-in-the-middle, but we want to get the
 approval from clients/end-users out of band by accepting the terms and
 conditions. The self signed certificates are sort of ok with browsers.
 But many other applications like dropbox sync, AV dat updates, vpn,
 etc. fail because of the untrusted certificate. On top of that we have
 some headless devices in our network as well. Since we have
 this information in our terms and conditions anyway, we would like to move to
 a trusted chain so that all the applications work as expected.

 Gentlemen,
   I see some users have already asked for help/reported a bug about the
 same thing, e.g.:
 http://www.squid-cache.org/mail-archive/squid-users/201112/0197.html.

   I also see that changes have been done in squid to support this
 behavior as well:
 http://www.squid-cache.org/mail-archive/squid-dev/201110/0207.html

  I followed the steps from this thread for configuration and I
 still don't see the chain information sent to the clients:
 http://www.squid-cache.org/mail-archive/squid-users/201109/0037.html

   So has the behavior of squid changed in recent times? Or am I
 missing something in my configuration? How do I make squid send the
 entire certificate chain to clients? Please help.

 Regards,
 Prasanna




Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-14 Thread Prasanna Venkateswaran
Hi,
Can someone please help me out here? In a nutshell, I am using a
properly signed certificate (not self signed) to generate certificates.
The chain is my certificate -> intermediate CA -> root CA. I cannot
make squid send the entire certificate chain to the clients, and this
is breaking many applications in our network.

 I am using squid 3.3.1. Please help.

Regards,
Prasanna

On 4/11/13, Prasanna Venkateswaran prasca...@gmail.com wrote:
 Hi Guy,
 We want to be a man-in-the-middle, but we want to get the
 approval from clients/end-users out of band by accepting the terms and
 conditions. The self signed certificates are sort of ok with browsers.
 But many other applications like dropbox sync, AV dat updates, vpn,
 etc. fail because of the untrusted certificate. On top of that we have
 some headless devices in our network as well. Since we have
 this information in our terms and conditions anyway, we would like to move to
 a trusted chain so that all the applications work as expected.

 Gentlemen,
   I see some users have already asked for help/reported a bug about the
 same thing, e.g.:
 http://www.squid-cache.org/mail-archive/squid-users/201112/0197.html.

   I also see that changes have been done in squid to support this
 behavior as well:
 http://www.squid-cache.org/mail-archive/squid-dev/201110/0207.html

  I followed the steps from this thread for configuration and I
 still don't see the chain information sent to the clients:
 http://www.squid-cache.org/mail-archive/squid-users/201109/0037.html

   So has the behavior of squid changed in recent times? Or am I
 missing something in my configuration? How do I make squid send the
 entire certificate chain to clients? Please help.

 Regards,
 Prasanna



Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-11 Thread Prasanna Venkateswaran
Hi Guy,
 We want to be a man-in-the-middle, but we want to get the
approval from clients/end-users out of band by accepting the terms and
conditions. The self signed certificates are sort of ok with browsers.
But many other applications like dropbox sync, AV dat updates, vpn,
etc. fail because of the untrusted certificate. On top of that we have
some headless devices in our network as well. Since we have
this information in our terms and conditions anyway, we would like to move to
a trusted chain so that all the applications work as expected.

Gentlemen,
  I see some users have already asked for help/reported a bug about the
same thing, e.g.:
http://www.squid-cache.org/mail-archive/squid-users/201112/0197.html.

  I also see that changes have been done in squid to support this
behavior as well:
http://www.squid-cache.org/mail-archive/squid-dev/201110/0207.html

 I followed the steps from this thread for configuration and I
still don't see the chain information sent to the clients:
http://www.squid-cache.org/mail-archive/squid-users/201109/0037.html

  So has the behavior of squid changed in recent times? Or am I
missing something in my configuration? How do I make squid send the
entire certificate chain to clients? Please help.

Regards,
Prasanna


Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-10 Thread Guy Helmer

On Apr 10, 2013, at 10:05 AM, Prasanna Venkateswaran prasca...@gmail.com 
wrote:

 Hi,
I spent more time on this today by looking at the code. I see from
 the code that squid does not accept certificates which require
 passphrase to read the private key.
 
 In the function readSslPrivateKey(...), I see this
 EVP_PKEY *pkey = PEM_read_bio_PrivateKey(bio.get(), NULL,
 passwd_callback, NULL);
 
   The passphrase argument is NULL. The certificate file I was
 using requires a passphrase to read the keys while the self signed
 certificate does not require it and hence it was working.
 
   Am I right in my understanding? Is this the way squid is
 designed to work or is this a bug?
 

Even if you could load this private key, I would not expect the matching 
certificate to be able to sign other certificates in support of 
man-in-the-middle decryption. Otherwise anyone could perform man-in-the-middle 
decryption with dynamically generated certificates that clients would trust.

To implement dynamic certificate generation, you need your own certificate 
capable of signing other certificates, and then your clients have to be 
configured to trust that certificate. It sounds like you had previously 
implemented that approach.
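
(A side note on the passphrase question quoted above: if a signing key is
passphrase-protected, an unencrypted copy can be made offline before handing
it to Squid; a sketch, with illustrative file names:

  openssl rsa -in protected.key -out plain.key

which prompts for the passphrase once and writes a plaintext key.)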

Guy


 
 On 4/9/13, Prasanna Venkateswaran prasca...@gmail.com wrote:
 Hi,
 I am using squid 3.3.1 to enable the dynamic certificate
 generation functionality and it works fine with a self signed
 certificate. I now have an actual signed certificate and the ssl chain
 is such that my certificate -> CA1 -> Root CA.
 
 I cleared the previous cert db directory and re-initialized it. I
 then created a cert.chain file in the format mentioned below.
 
 -BEGIN CERTIFICATE-
 public key of my certificate 
 -END CERTIFICATE-
 -BEGIN RSA PRIVATE KEY-
  my private key 
 -END RSA PRIVATE KEY-
 -BEGIN CERTIFICATE-
 public key of CA1 
 -END CERTIFICATE-
 -BEGIN CERTIFICATE-
 public key of Root CA 
 -END CERTIFICATE-
 
 squid.conf:
 https_port 3129 intercept generate-host-certificates=on
 dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/cert.chain
 ssl-bump
 
 But when I start squid, I get the following error.
 
 /usr/sbin/squid start
 sh: (null): not found
 FATAL: No valid signing SSL certificate configured for https_port
 0.0.0.0:3129
 Squid Cache (Version 3.3.1): Terminated abnormally.
 CPU Usage: 0.050 seconds = 0.050 user + 0.000 sys
 Maximum Resident Size: 0 KB
 Page faults with physical i/o: 0
 
 
  I also tried with just my cert and private key without the chain
 information and I get the same error there also. Am I missing
 something here?
 
 Regards,
 Prasanna
 







Re: [squid-users] Need help with ACL is used but there is no HTTP request -- not matching

2013-04-03 Thread Pavel Bychykhin



On 02.04.2013 17:06, Amos Jeffries wrote:


Right now I'm interested in the back trace / stack trace of what code is 
leading up to the assertion.



# gdb /usr/local/sbin/squid ./squid.core.0
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as amd64-marcel-freebsd...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.
Reading symbols from /usr/lib/librt.so.1...done.
Loaded symbols for /usr/lib/librt.so.1
Reading symbols from /usr/lib/libgssapi.so.10...done.
Loaded symbols for /usr/lib/libgssapi.so.10
Reading symbols from /usr/lib/libheimntlm.so.10...done.
Loaded symbols for /usr/lib/libheimntlm.so.10
Reading symbols from /usr/lib/libkrb5.so.10...done.
Loaded symbols for /usr/lib/libkrb5.so.10
Reading symbols from /usr/lib/libhx509.so.10...done.
Loaded symbols for /usr/lib/libhx509.so.10
Reading symbols from /usr/lib/libcom_err.so.5...done.
Loaded symbols for /usr/lib/libcom_err.so.5
Reading symbols from /lib/libcrypto.so.6...done.
Loaded symbols for /lib/libcrypto.so.6
Reading symbols from /usr/lib/libasn1.so.10...done.
Loaded symbols for /usr/lib/libasn1.so.10
Reading symbols from /usr/lib/libroken.so.10...done.
Loaded symbols for /usr/lib/libroken.so.10
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /usr/local/lib/libltdl.so.7...done.
Loaded symbols for /usr/local/lib/libltdl.so.7
Reading symbols from /usr/lib/libstdc++.so.6...done.
Loaded symbols for /usr/lib/libstdc++.so.6
Reading symbols from /lib/libm.so.5...done.
Loaded symbols for /lib/libm.so.5
Reading symbols from /lib/libgcc_s.so.1...done.
Loaded symbols for /lib/libgcc_s.so.1
Reading symbols from /lib/libthr.so.3...done.
Loaded symbols for /lib/libthr.so.3
Reading symbols from /lib/libc.so.7...done.
Loaded symbols for /lib/libc.so.7
Reading symbols from /usr/local/lib/nss_winbind.so.1...done.
Loaded symbols for /usr/local/lib/nss_winbind.so.1
Reading symbols from /usr/local/lib/nss_wins.so.1...done.
Loaded symbols for /usr/local/lib/nss_wins.so.1
Reading symbols from /usr/local/lib/libldap-2.4.so.8...done.
Loaded symbols for /usr/local/lib/libldap-2.4.so.8
Reading symbols from /usr/local/lib/liblber-2.4.so.8...done.
Loaded symbols for /usr/local/lib/liblber-2.4.so.8
Reading symbols from /usr/local/lib/libexecinfo.so.1...done.
Loaded symbols for /usr/local/lib/libexecinfo.so.1
Reading symbols from /lib/libmd.so.5...done.
Loaded symbols for /lib/libmd.so.5
Reading symbols from /usr/local/lib/libiconv.so.3...done.
Loaded symbols for /usr/local/lib/libiconv.so.3
Reading symbols from /usr/local/lib/libtalloc.so.2...done.
Loaded symbols for /usr/local/lib/libtalloc.so.2
Reading symbols from /usr/local/lib/libtdb.so.1...done.
Loaded symbols for /usr/local/lib/libtdb.so.1
Reading symbols from /lib/libz.so.6...done.
Loaded symbols for /lib/libz.so.6
Reading symbols from /usr/local/lib/libsasl2.so.3...done.
Loaded symbols for /usr/local/lib/libsasl2.so.3
Reading symbols from /usr/lib/libssl.so.6...done.
Loaded symbols for /usr/lib/libssl.so.6
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x000802f6bbbc in thr_kill () from /lib/libc.so.7
[New Thread 803407400 (LWP 101256/squid)]
(gdb) backtrace
#0  0x000802f6bbbc in thr_kill () from /lib/libc.so.7
#1  0x000803008e7b in abort () from /lib/libc.so.7
#2  0x005452ad in xassert (msg=0x773577 "c->locks > 0", file=0x773529 
"cbdata.cc", line=463)
at debug.cc:567
#3  0x0050bbc6 in cbdataInternalUnlockDbg (p=0x8034a5158, file=0x7ae0b0 
"Checklist.cc", line=273)
at cbdata.cc:463
#4  0x006b6829 in ~ACLChecklist (this=0x7fffd4f0) at 
Checklist.cc:273
#5  0x00671b56 in ~ACLFilledChecklist (this=0x7fffd4f0) at 
FilledChecklist.cc:101
#6  0x0051dcfb in httpAccept (params=@0x8034a0978) at 
client_side.cc:3279
#7  0x006cc01e in CommAcceptCbPtrFun::dial (this=0x8034a0970) at 
CommCalls.cc:136
#8  0x00529109 in CommCbFunPtrCallTCommAcceptCbPtrFun::fire 
(this=0x8034a0940) at CommCalls.h:381
#9  0x006b8294 in AsyncCall::make (this=0x8034a0940) at AsyncCall.cc:36
#10 0x006bc67c in AsyncCallQueue::fireNext (this=0x8034298d0) at 
AsyncCallQueue.cc:54
#11 0x006bc7df in AsyncCallQueue::fire (this=0x8034298d0) at 
AsyncCallQueue.cc:40
#12 0x005699e9 in EventLoop::dispatchCalls (this=0x7fffd960) at 
EventLoop.cc:154
#13 0x00569d52 in EventLoop::runOnce (this=0x7fffd960) at 
EventLoop.cc:131
#14 0x00569e9e in EventLoop::run (this=0x7fffd960) at 
EventLoop.cc:95
#15 0x005dc7ac in SquidMain (argc=3, argv=0x7fffdb10) at 
main.cc:1501
#16 

Re: [squid-users] Need help with ACL is used but there is no HTTP request -- not matching

2013-04-02 Thread Amos Jeffries

On 2/04/2013 11:26 p.m., Pavel Bychykhin wrote:

Hi All!

My system is FreeBSD 9.0
My SQUID ver. is 3.2.9.

Recently i tried to define some rules for the client delay pools.
Here part from my config:

acl to_rfc1579 dst 192.168.0.0/16
acl to_rfc1579 dst 10.0.0.0/8
acl to_rfc1579 dst 172.16.0.0/12

client_delay_pools 1
client_delay_parameters 1 16384 16384
client_delay_access 1 allow all !to_rfc1579

After that Squid died, and I see in the log:

2013/04/02 10:48:56 kid1| ACL::checklistMatches WARNING: 'to_rfc1579' 
ACL is used but there is no HTTP request -- not matching

2013/04/02 10:48:56 kid1| assertion failed: cbdata.cc:463: "c->locks > 0"


If you are able to run Squid in a debugger I'm very interested in seeing 
a stack trace from that assertion.




Is it a bug, or do I just not understand something about access lists?


Both. Assert is always a bug and the client_delay_pool operates right 
after the TCP SYN is accept()'ed.


client_delay_access is tested as soon as the TCP SYN packet has been 
accepted. All Squid has for ACLs to work with at that point is the 
IP:port of each end of the client TCP connection.


client_delay_access can be used with:  src, arp, localip / myip, 
localport / myport.
  myportname ACL should in theory work as well, but looking at the 
code I see the required details are not yet passed to the ACL code 
properly so that is broken.


The dst ACL is for testing the destination IP address an HTTP request 
might be going to. It requires an HTTP request URL to locate a domain 
name then DNS to locate the IP addresses.


Amos


Re: [squid-users] Need help with ACL is used but there is no HTTP request -- not matching

2013-04-02 Thread Pavel Bychykhin

If you give me instructions on how to run Squid in a debugger and what kind of 
results you expect,
I could do it next Saturday or Sunday.
Also, could you answer the next question:
Are client delay pools the tool to limit what a client sends to the internet 
(upload bandwidth)?
I'm looking for a way to limit the per-client upload stream.
If the client delay pools serve another purpose, I'll just forget about 
the feature.

On 02.04.2013 13:52, Amos Jeffries wrote:

On 2/04/2013 11:26 p.m., Pavel Bychykhin wrote:

Hi All!

My system is FreeBSD 9.0
My SQUID ver. is 3.2.9.

Recently i tried to define some rules for the client delay pools.
Here part from my config:

acl to_rfc1579 dst 192.168.0.0/16
acl to_rfc1579 dst 10.0.0.0/8
acl to_rfc1579 dst 172.16.0.0/12

client_delay_pools 1
client_delay_parameters 1 16384 16384
client_delay_access 1 allow all !to_rfc1579

After that Squid died, and I see in the log:

2013/04/02 10:48:56 kid1| ACL::checklistMatches WARNING: 'to_rfc1579' ACL is 
used but there is no HTTP request -- not matching
2013/04/02 10:48:56 kid1| assertion failed: cbdata.cc:463: "c->locks > 0"


If you are able to run Squid in a debugger I'm very interested in seeing a 
stack trace from that assertion.



Is it a bug, or do I just not understand something about access lists?


Both. Assert is always a bug and the client_delay_pool operates right after the 
TCP SYN is accept()'ed.

client_delay_access is tested as soon as the TCP SYN packet has been accepted. 
All Squid has for ACLs to work with at that point is the IP:port of
each end of the client TCP connection.

client_delay_access can be used with:  src, arp, localip / myip, localport / 
myport.
   myportname ACL should in theory work as well, but looking at the code I 
see the required details are not yet passed to the ACL code properly so
that is broken.

The dst ACL is for testing the destination IP address an HTTP request might be 
going to. It requires an HTTP request URL to locate a domain name then
DNS to locate the IP addresses.

Amos



--
Best regards,
Pavel


Re: [squid-users] Need help with ACL is used but there is no HTTP request -- not matching

2013-04-02 Thread Amos Jeffries

On 3/04/2013 12:59 a.m., Pavel Bychykhin wrote:
If you give me instructions on how to run Squid in a debugger and what 
kind of results you expect,

I could do it next Saturday or Sunday.


A how-to is at http://wiki.squid-cache.org/SquidFaq/BugReporting
There are details for running Squid under a debugger with zero-downtime 
on a production server if you need that.


Right now I'm interested in the back trace / stack trace of what code is 
leading up to the assertion.




Also, could you answer the next question:
Are client delay pools the tool to limit what a client sends to the internet 
(upload bandwidth)?

I'm looking for a way to limit the per-client upload stream.
If the client delay pools serve another purpose, I'll just forget about 
the feature.




Yes, that is the feature that does per-client traffic control. It just 
does so from the first bytes arriving from the client, long before most 
of the ACL data is available for use.

So you need to decide how to limit the client based on their TCP details.
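
For example, keyed on the client source address instead of the destination (a
sketch reusing the numbers from this thread; the subnet is illustrative):

  acl lan_clients src 192.168.0.0/16
  client_delay_pools 1
  client_delay_parameters 1 16384 16384
  client_delay_access 1 allow lan_clients
  client_delay_access 1 deny all

src is among the ACL types fully available at TCP-accept time, so this avoids
the "no HTTP request" warning entirely.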



On 02.04.2013 13:52, Amos Jeffries wrote:

On 2/04/2013 11:26 p.m., Pavel Bychykhin wrote:

Hi All!

My system is FreeBSD 9.0
My SQUID ver. is 3.2.9.

Recently i tried to define some rules for the client delay pools.
Here part from my config:

acl to_rfc1579 dst 192.168.0.0/16
acl to_rfc1579 dst 10.0.0.0/8
acl to_rfc1579 dst 172.16.0.0/12

client_delay_pools 1
client_delay_parameters 1 16384 16384
client_delay_access 1 allow all !to_rfc1579

After that Squid died, and I see in the log:

2013/04/02 10:48:56 kid1| ACL::checklistMatches WARNING: 
'to_rfc1579' ACL is used but there is no HTTP request -- not matching
2013/04/02 10:48:56 kid1| assertion failed: cbdata.cc:463: "c->locks > 0"


If you are able to run Squid in a debugger I'm very interested in 
seeing a stack trace from that assertion.




Is it a bug, or do I just not understand something about access 
lists?


Both. Assert is always a bug and the client_delay_pool operates right 
after the TCP SYN is accept()'ed.


client_delay_access is tested as soon as the TCP SYN packet has been 
accepted. All Squid has for ACLs to work with at that point is the 
IP:port of

each end of the client TCP connection.

client_delay_access can be used with:  src, arp, localip / myip, 
localport / myport.
   myportname ACL should in theory work as well, but looking at the 
code I see the required details are not yet passed to the ACL code 
properly so

that is broken.

The dst ACL is for testing the destination IP address an HTTP request 
might be going to. It requires an HTTP request URL to locate a domain 
name then

DNS to locate the IP addresses.

Amos







Re: [squid-users] Need help with Squid reverse proxy with mirrored parents please!

2013-03-27 Thread Amos Jeffries

On 28/03/2013 12:28 p.m., Alex Stahl wrote:

Hiya Squid Users - So I'm trying to configure Squid as a reverse
proxy, listening on port 80, in front of two web servers.  One web
server runs on the localhost and listens on port 81 and contains a
subset of all website content.  Then the second web server is a remote
box, listening on port 80, with a full set of all content.

What I'd like Squid to do is act as a single front-end for these
servers.  A request comes in, and if it's a cache miss, it should
first ask the localhost web server if it can satisfy the request.  If
so, it serves it up.  If not, it should forward it on to the second
web server.


Which squid version? The presence or absence of vhost settings depends on it.


Following the guide here:
http://wiki.squid-cache.org/ConfigExamples/Reverse/MultipleWebservers,
I've come up with the following config:

http_port 80 accel defaultsite=localhost


Problem #1: Use the public FQDN in defaultsite=.
The above config will make some URLs handled by Squid all be 
http://localhost/... which is NOT a good thing when those URLs are sent 
out to the client.




cache_peer localhost parent 81 0 originserver name=local
cache_peer example.com parent 80 0 originserver name=remote
acl request dstdomain localhost
cache_peer_access local allow request
cache_peer_access remote allow request

(I have other ACLs unrelated to this config, such as allowing http
requests on port 80).

The problem I run into is that a miss on the localhost web server (an
HTTP 404) isn't properly forwarded on to the remote server - squid
only ever tries a single parent.  If I remove the localhost peer, the
request is properly forwarded, and I get back the expected HTTP 200.

What am I missing in my config to make it do that?


404 means does not exist. How is Squid to know that the localhost peer 
was lying and some other peer does have the object?


* Fix the defaultsite=localhost problem
* Add vhost to your http_port line to make Squid aware of what domains 
requests are for.
* alter your request ACL into different ACLs which match against 
requests destined to each server, such that only the server which the 
requested object can come from is contacted. A sketch combining these 
three points follows below.
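
A minimal sketch of those three changes together (www.example.com and the ACL
pattern are placeholders; which URLs map to which peer is site-specific):

  http_port 80 accel defaultsite=www.example.com vhost
  cache_peer 127.0.0.1 parent 81 0 originserver name=local
  cache_peer example.com parent 80 0 originserver name=remote
  acl local_objects urlpath_regex ^/subset/   # placeholder pattern
  cache_peer_access local allow local_objects
  cache_peer_access local deny all
  cache_peer_access remote deny local_objects
  cache_peer_access remote allow all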


Amos


Re: [squid-users] Need help with Squid reverse proxy with mirrored parents please!

2013-03-27 Thread Alex Stahl
Thanks for the suggestions... although  I can't tell if they work just
yet.  Squid version is 3.1.10; I'm restricted in my choice here and
unfortunately cannot upgrade.

I do think the crux of my issue lies in exactly your point regarding
ACLs.  Per your advice, and the write-up at the link I referenced, the
conf should have ACLs upon which Squid can select which origin server
to ask for a given object.  In the example, they switch on either
cache_peer_domain or urlpath_regex.  My issue is that I want to switch
on the presence (or lack thereof) of an object on a web server.

Here's some pseudocode to express this
1. Request object from local peer
2a. If local peer has object, return that
2b. If local peer does not have object, request object from remote peer
3a. If remote peer has object, return that
3b. If remote peer does not have object, now return a 404

Currently I get the 404 after step #2b.  So how would I create ACLs
that express this?

To provide a little more insight, this is for a provisioning system
where a local server contains a subset of the contents of a yum repo
(i.e. the local peer).  The full set of contents of the yum repo can
be found on the remote peer.  The local server is used to stand up a
bare-bones install on its clients.

Thanks in advance,
Alex

On Wed, Mar 27, 2013 at 4:59 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 28/03/2013 12:28 p.m., Alex Stahl wrote:

 Hiya Squid Users - So I'm trying to configure Squid as a reverse
 proxy, listening on port 80, in front of two web servers.  One web
 server runs on the localhost and listens on port 81 and contains a
 subset of all website content.  Then the second web server is a remote
 box, listening on port 80, with a full set of all content.

 What I'd like Squid to do is act as a single front-end for these
 servers.  A request comes in, and if it's a cache miss, it should
 first ask the localhost web server if it can satisfy the request.  If
 so, it serves it up.  If not, it should forward it on to the second
 web server.


 Which squid version? The presence or absence of vhost settings depends on it.


 Following the guide here:
 http://wiki.squid-cache.org/ConfigExamples/Reverse/MultipleWebservers,
 I've come up with the following config:

 http_port 80 accel defaultsite=localhost


 Problem #1 Use the public FQDN name in defaultsite=.
 The above config will make some URLs handled by Squid all be
 http://localhost/... which is NOT a good thing when those URLs are sent out
 to the client.



 cache_peer localhost parent 81 0 originserver name=local
 cache_peer example.com parent 80 0 originserver name=remote
 acl request dstdomain localhost
 cache_peer_access local allow request
 cache_peer_access remote allow request

 (I have other ACLs unrelated to this config, such as allowing http
 requests on port 80).

 The problem I run into is that a miss on the localhost web server (an
 HTTP 404) isn't properly forwarded on to the remote server - squid
 only ever tries a single parent.  If I remove the localhost peer, the
 request is properly forwarded, and I get back the expected HTTP 200.

 What am I missing in my config to make it do that?


 404 means does not exist. How is Squid to know that the localhost peer was
 lying and some other peer does have the object?

 * Fix the defaultsite=localhost problem
 * Add vhost to your http_port line to make Squid aware of what domains
 requests are for.
 * alter your request ACL into different ACLs which match against requests
 destined to each server, such that only the server which the requested
 object can come from is contacted.

 Amos


Re: [squid-users] need help from somebody installed videocache with squid !

2013-03-23 Thread Amos Jeffries

On 23/03/2013 8:46 p.m., Ahmad wrote:

Hi,
I just want somebody who has tried videocache with squid.

Recently I finally managed to chain squidguard with videocache and it seems ok!

But the problem is I have frequent errors in videocache!


Please contact the developers of videocache for this problem. It was 
their choice to create the software and their responsibility to maintain 
it in the face of constant changes by the youtube developers.


Squid-2.7 introduced the store-URL feature which allowed direct cache 
entry de-duplication in Squid of content from sites like YouTube. You 
might want to try that instead.
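
In 2.7 that feature is driven by a rewrite helper, roughly like this (a
sketch; the helper path is a placeholder for a script you supply and the ACL
is an example):

  acl store_rewrite_list dstdomain .youtube.com
  storeurl_rewrite_program /usr/local/bin/storeurl.pl
  storeurl_access allow store_rewrite_list
  storeurl_access deny all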


Thank you
Amos



Re: [squid-users] NEED HELP: How to complie Squid 3.2.6 with TCMALLOC for high performance ??

2013-01-12 Thread Amos Jeffries

On 13/01/2013 4:58 p.m., stanley wrote:

I did these steps:

1: install the libunwind lib (version: libunwind-1.1)

2: install google-perftools (version: gperftools-2.0)

3: download squid-3.2.5.tar.gz, extract, then configure, then


configure how? with what command line?



modify src/Makefile following the article:

# vi src/Makefile

squid_LDADD = \
 -L../lib \
 -ltcmalloc_minimal \
  \
..
data_DATA = \
 mib.txt
LDADD = -L../lib -lmiscutil -lpthread -lm -ltcmalloc_minimal
  


I did not find the string "-L../lib \".

  


Please help me: how do I do that? Thank you very much.


You should not need to modify Makefiles manually at any point.

All you should need is:
  ./configure CXXFLAGS=-ltcmalloc_minimal CFLAGS=-ltcmalloc_minimal
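
An untested alternative, should the compiler reject a library flag inside
CFLAGS/CXXFLAGS, is the standard autoconf linker variables (the library path
is an example):

  ./configure LDFLAGS=-L/usr/local/lib LIBS=-ltcmalloc_minimal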


Amos


RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-06-08 Thread Ruiyuan Jiang
Hi, Amos

I tried squid v3.2.0.17 on a Redhat enterprise server v6.2, x86_64, and it did 
not work for NTLM authentication. I just kept getting the user name and password 
prompt when I accessed the site, even after I put in the user name and password. 
In the squid log, it shows the two entries below repeatedly:

TCP_MISS/401 1672 GET https://webmail.site.com/ - FIRSTUP_PARENT/10.10.10.10 
text/html
TCP_MISS/401 293 POST https://webmail.site.com/ews/Exchange.asmx - 
FIRSTUP_PARENT/10.10.10.10 -

I used the option --enable-ntlm-fail-open when I compiled squid 3.2.0.17.

Ruiyuan

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, May 29, 2012 8:06 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

On 30.05.2012 03:11, Ruiyuan Jiang wrote:
 Thanks for the response Amos. Do you think it is worth testing
 squid v3.2.x on my Solaris box for NTLM auth? I don't have any
 problem testing it out.


I think it is worth it. 3.2 is HTTP/1.1 and avoids all the HTTP/1.0 
issues which may still crop up with 3.1.

Amos






RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-29 Thread Ruiyuan Jiang
Thanks for the response Amos. Do you think it is worth testing squid v3.2.x 
on my Solaris box for NTLM auth? I don't have any problem testing it out.

Ruiyuan


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Sunday, May 27, 2012 6:10 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

On 25/05/2012 7:50 a.m., Ruiyuan Jiang wrote:
 Hi, Clem

 I am reading your post

 http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

 In the post, someone stated that NTLM auth does not support:

 It's facing the double hop issue: ntlm credentials can be sent only on one 
 hop, and are lost with 2 hops, like: client -> squid (hop1) -> IIS6 rpc proxy 
 (hop2) -> exchange 2007
 
 That is not true. Here we have the setup:
 
 Client -> Apache (hop1) -> IIS 7 -> exchange 2007
 
 The setup works; it is just that I could not get the latest Apache, otherwise I 
 would continue to use the Apache reverse proxy. The latest Apache does not support 
 MS RPC over http, as is posted on the internet.

 https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

 I am not sure why squid does not support NTLM auth to the backend exchange 
 server.

Squid does support relaying any type of www-auth headers to the backend 
over multiple hops. What Squid does not support is logging *itself* into 
a peer proxy with NTLM (using proxy-auth headers).

There are also various minor but annoying bugs in NTLM pinning support 
and persistent connections handling in some Squid releases; with those, 
basically the newer the Squid release the better, but it's still not 100% 
clean.

 I am noting a LOT of complaints in the areas of Squid-IIS and 
sharepoint, and a few other MS products this year. But nobody has yet 
been able to supply a patch for anything (I don't have MS products or 
time to work on this stuff myself). There is a hint that it is related 
to Squid-3.1 persistent connection keep-alive to the server, if that 
helps anyone.

Amos






RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-29 Thread Amos Jeffries

On 30.05.2012 03:11, Ruiyuan Jiang wrote:

Thanks for the response Amos. Do you think it is worth testing
squid v3.2.x on my Solaris box for NTLM auth? I don't have any
problem testing it out.



I think it is worth it. 3.2 is HTTP/1.1 and avoids all the HTTP/1.0 
issues which may still crop up with 3.1.


Amos



Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-27 Thread Amos Jeffries

On 25/05/2012 7:50 a.m., Ruiyuan Jiang wrote:

Hi, Clem

I am reading your post

http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

In the post, someone stated that NTLM auth does not support:

It's facing the double hop issue: ntlm credentials can be sent only on one hop, and 
are lost with 2 hops, like: client -> squid (hop1) -> IIS6 rpc proxy (hop2) -> 
exchange 2007

That is not true. Here we have the setup:

Client -> Apache (hop1) -> IIS 7 -> exchange 2007

The setup works; it is just that I could not get the latest Apache, otherwise I 
would continue to use the Apache reverse proxy. The latest Apache does not support 
MS RPC over http, as is posted on the internet.

https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

I am not sure why squid does not support NTLM auth to the backend exchange 
server.


Squid does support relaying any type of www-auth headers to the backend
over multiple hops. What Squid does not support is logging *itself* into
a peer proxy with NTLM (using proxy-auth headers).


There are also various minor but annoying bugs in NTLM pinning support 
and persistent connections handling in some Squid releases, with those 
basically the newer the Squid release the better but its still not 100% 
clean.


I am noting a LOT of complaints in the areas of Squid-IIS and
sharepoint, and a few other MS products this year. But nobody has yet
been able to supply a patch for anything (I don't have MS products or
time to work on this stuff myself). There is a hint that it is related
to Squid-3.1 persistent connection keep-alive to the server, if that
helps anyone.


Amos


Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-27 Thread Amos Jeffries

On 26/05/2012 1:34 a.m., Ruiyuan Jiang wrote:

Hi, Clem

In the Apache link that I provided, it stated that Apache v2.0.58 and below
support RPC over HTTP. Any version of Apache above that does not
support RPC. Two reasons:

1. it is not a standard.
2. patents by Microsoft if Apache uses it.


Patents?

RPC over HTTP is required to fit within HTTP standard operational
behaviour. If it were breaking protocol requirements, that would explain
why Squid, which does obey HTTP standards, was breaking as an
RPC-over-HTTP relay.


FYI: The body content of the HTTP messages is the RPC protocol under 
patent, possibly the method names themselves. Neither Squid nor Apache 
when proxying have any reason to touch those details and thus are not 
affected by any such patents (unless they are made to do so).


Amos



Ruiyuan Jiang


-Original Message-
From: Clem [mailto:clemf...@free.fr]
Sent: Friday, May 25, 2012 2:19 AM
To: Ruiyuan Jiang
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

Hi Ruiyuan,

  Client -> Apache (hop1) -> IIS 7 -> exchange 2007. The setup works;
I just could not use the latest Apache, otherwise I would continue
to use the Apache reverse proxy. The latest Apache does not support MS RPC
over HTTP, as posted on the internet.

What do you mean when you say that the latest Apache does not support MS
RPC OVER HTTP, whereas your version supports it?? That does not make sense.

If I can do Client -> Apache reverse proxy -> IIS RPC -> exchange 2007,
I'll install it as soon as possible!

Thx

Clem


Le 24/05/2012 21:52, Ruiyuan Jiang a écrit :

By the way, NTLM works with a Windows 7 client through Apache here.


Hi, Clem

I am reading your post

http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

In the post, someone stated that NTLM auth is not supported:

It's facing the double hop issue; ntlm credentials can be sent only on one
hop, and are lost with 2 hops like: client -> squid (hop1) -> IIS6 rpc proxy
(hop2) -> exchange 2007

That is not true. Here we have this setup:

Client -> Apache (hop1) -> IIS 7 -> exchange 2007

The setup works; I just could not use the latest Apache, otherwise I would
continue to use the Apache reverse proxy. The latest Apache does not support
MS RPC over HTTP, as posted on the internet.

https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

I am not sure why squid does not support NTLM auth to the backend exchange 
server.

Ruiyuan










Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-25 Thread Clem

Hi Ruiyuan,

Client -> Apache (hop1) -> IIS 7 -> exchange 2007. The setup works;
I just could not use the latest Apache, otherwise I would continue
to use the Apache reverse proxy. The latest Apache does not support MS RPC
over HTTP, as posted on the internet.


What do you mean when you say that the latest Apache does not support MS
RPC OVER HTTP, whereas your version supports it?? That does not make sense.


If I can do Client -> Apache reverse proxy -> IIS RPC -> exchange 2007,
I'll install it as soon as possible!


Thx

Clem


Le 24/05/2012 21:52, Ruiyuan Jiang a écrit :

By the way, NTLM works with a Windows 7 client through Apache here.


Hi, Clem

I am reading your post

http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

In the post, someone stated that NTLM auth is not supported:

It's facing the double hop issue; ntlm credentials can be sent only on one
hop, and are lost with 2 hops like: client -> squid (hop1) -> IIS6 rpc proxy
(hop2) -> exchange 2007

That is not true. Here we have this setup:

Client -> Apache (hop1) -> IIS 7 -> exchange 2007

The setup works; I just could not use the latest Apache, otherwise I would
continue to use the Apache reverse proxy. The latest Apache does not support
MS RPC over HTTP, as posted on the internet.

https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

I am not sure why squid does not support NTLM auth to the backend exchange 
server.

Ruiyuan









RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-25 Thread Ruiyuan Jiang
Hi, Clem

In the Apache link that I provided, it stated that Apache v2.0.58 and below
support RPC over HTTP. Any version of Apache above that does not
support RPC. Two reasons:

1. it is not a standard.
2. patents by Microsoft if Apache uses it.

Ruiyuan Jiang


-Original Message-
From: Clem [mailto:clemf...@free.fr] 
Sent: Friday, May 25, 2012 2:19 AM
To: Ruiyuan Jiang
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

Hi Ruiyuan,

 Client -> Apache (hop1) -> IIS 7 -> exchange 2007. The setup works;
I just could not use the latest Apache, otherwise I would continue
to use the Apache reverse proxy. The latest Apache does not support MS RPC
over HTTP, as posted on the internet.

What do you mean when you say that the latest Apache does not support MS
RPC OVER HTTP, whereas your version supports it?? That does not make sense.

If I can do Client -> Apache reverse proxy -> IIS RPC -> exchange 2007,
I'll install it as soon as possible!

Thx

Clem


Le 24/05/2012 21:52, Ruiyuan Jiang a écrit :
 By the way, NTLM works with a Windows 7 client through Apache here.


 Hi, Clem

 I am reading your post

 http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

 In the post, someone stated that NTLM auth is not supported:

 It's facing the double hop issue; ntlm credentials can be sent only on one
 hop, and are lost with 2 hops like: client -> squid (hop1) -> IIS6 rpc proxy
 (hop2) -> exchange 2007

 That is not true. Here we have this setup:

 Client -> Apache (hop1) -> IIS 7 -> exchange 2007

 The setup works; I just could not use the latest Apache, otherwise I would
 continue to use the Apache reverse proxy. The latest Apache does not support
 MS RPC over HTTP, as posted on the internet.

 https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

 I am not sure why squid does not support NTLM auth to the backend exchange 
 server.

 Ruiyuan








RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-24 Thread Ruiyuan Jiang
Thanks for the reply, Clem.

We use NTLM for authentication. We may be able to enable HTTP authentication
for the virtual directory (/rpc) but we may not be able to do that for the
whole exchange since some other programs use NTLM auth.

After I posted the message, I compared my Apache reverse proxy server log for
MS RPC and squid's log for MS RPC. I noticed the messages are the same (http
codes 200 and 401). I used a very old Apache for that since newer Apache does
not support MS RPC over http.

Ruiyuan Jiang


-Original Message-
From: Clem [mailto:clemf...@free.fr] 
Sent: Thursday, May 24, 2012 1:47 AM
To: Ruiyuan Jiang
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

Hello Ruiyan,

Which auth have you set in your Outlook Anywhere settings? Squid works
fine with Basic but has big troubles with NTLM.

regards

Clem

Le 23/05/2012 22:38, Ruiyuan Jiang a écrit :
 Hi, when I tried to test accessing the MS exchange server, Outlook just kept
 prompting for the user name and password without luck. Here is the message from
 squid's access.log from the test:

 1337803935.354  6 207.46.14.62 TCP_MISS/200 294 RPC_IN_DATA 
 https://webmail.juicycouture.com/Rpc/RpcProxy.dll - PINNED/exchangeServer 
 application/rpc
 1337803937.876  6 207.46.14.62 TCP_MISS/401 666 RPC_IN_DATA 
 https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
 FIRST_UP_PARENT/exchangeServer text/html
 1337803937.965 11 207.46.14.62 TCP_MISS/401 389 RPC_IN_DATA 
 https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
 FIRST_UP_PARENT/exchangeServer text/html
 1337803938.144  6 207.46.14.62 TCP_MISS/401 666 RPC_OUT_DATA 
 https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
 FIRST_UP_PARENT/exchangeServer text/html
 1337803938.229  6 207.46.14.62 TCP_MISS/401 389 RPC_OUT_DATA 
 https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
 FIRST_UP_PARENT/exchangeServer text/html


 Here is my squid.conf for the test:

 https_port 156.146.2.196:443 accel 
 cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
 key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
 cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
 defaultsite=webmail.juicycouture.com

 cache_peer internal_ex_serv parent 443 0 no-query originserver login=PASS ssl 
 sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=exchangeServer

 acl EXCH dstdomain .juicycouture.com

 cache_peer_access exchangeServer allow EXCH
 cache_peer_access exchangeServer deny all
 never_direct allow EXCH

 http_access allow EXCH
 http_access deny all
 miss_access allow EXCH
 miss_access deny all


 Where did I go wrong? I also tried a different squid.conf (basically removing
 all the ACLs) but got the same message in access.log:

 https_port 156.146.2.196:443 accel 
 cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
 key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
 cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
 defaultsite=webmail.juicycouture.com

 cache_peer internal_ex_serv parent 443 0 no-query originserver login=PASS ssl 
 sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=exchangeServer

 cache_peer_access exchangeServer allow all

 http_access allow all
 miss_access allow all

 Thanks.

 Ryan Jiang






RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-24 Thread Ruiyuan Jiang
Hi, Clem

I am reading your post 

http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

In the post, someone stated that NTLM auth is not supported:

It's facing the double hop issue; ntlm credentials can be sent only on one
hop, and are lost with 2 hops like: client -> squid (hop1) -> IIS6 rpc proxy
(hop2) -> exchange 2007

That is not true. Here we have this setup:

Client -> Apache (hop1) -> IIS 7 -> exchange 2007

The setup works; I just could not use the latest Apache, otherwise I would
continue to use the Apache reverse proxy. The latest Apache does not support
MS RPC over HTTP, as posted on the internet.

https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

I am not sure why squid does not support NTLM auth to the backend exchange 
server.

Ruiyuan








Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-23 Thread Clem

Hello Ruiyan,

Which auth have you set in your Outlook Anywhere settings? Squid works
fine with Basic but has big troubles with NTLM.


regards

Clem

Le 23/05/2012 22:38, Ruiyuan Jiang a écrit :

Hi, when I tried to test accessing the MS exchange server, Outlook just kept
prompting for the user name and password without luck. Here is the message from
squid's access.log from the test:

1337803935.354  6 207.46.14.62 TCP_MISS/200 294 RPC_IN_DATA 
https://webmail.juicycouture.com/Rpc/RpcProxy.dll - PINNED/exchangeServer 
application/rpc
1337803937.876  6 207.46.14.62 TCP_MISS/401 666 RPC_IN_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html
1337803937.965 11 207.46.14.62 TCP_MISS/401 389 RPC_IN_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html
1337803938.144  6 207.46.14.62 TCP_MISS/401 666 RPC_OUT_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html
1337803938.229  6 207.46.14.62 TCP_MISS/401 389 RPC_OUT_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html


Here is my squid.conf for the test:

https_port 156.146.2.196:443 accel 
cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
defaultsite=webmail.juicycouture.com

cache_peer internal_ex_serv parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=exchangeServer

acl EXCH dstdomain .juicycouture.com

cache_peer_access exchangeServer allow EXCH
cache_peer_access exchangeServer deny all
never_direct allow EXCH

http_access allow EXCH
http_access deny all
miss_access allow EXCH
miss_access deny all


Where did I go wrong? I also tried a different squid.conf (basically removing all
the ACLs) but got the same message in access.log:

https_port 156.146.2.196:443 accel 
cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
defaultsite=webmail.juicycouture.com

cache_peer internal_ex_serv parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=exchangeServer

cache_peer_access exchangeServer allow all

http_access allow all
miss_access allow all

Thanks.

Ryan Jiang







Re: [squid-users] Need help to build my own external help

2012-04-17 Thread Mohamed Amine Kadimi

  I jump in the middle of the conversation but,
  the return will constantly end the helper...
  It is supposed to loop forever.
  I used to use this:

  #define INPUTSIZE 8192
  char input[INPUTSIZE];
  while (fgets(input, sizeof(input), stdin)) {
      if ((cp = strchr(input, '\n')) == NULL) {
          fprintf(stderr, "filter: input too big: %s\n", input);
      } else {
          *cp = '\0';
      }
      ... /* per-request work */
      fflush(stderr);
      fflush(stdout);
  }

  JD

 Actually, it should loop forever because the return is outside the
 while (fgets(...) != NULL) and fgets is supposed to not return NULL
 unless some error occurs.

 Also refer to the source code of ext_session_acl which has a return 0
 at the end.

 Ah... my bad...  I jumped too fast  ^_^
 I saw "The src_ip_ext helpers are crashing too rapidly" and jumped to a bad
 conclusion.
 Tried to debug by printing from the helper to stderr?
 Tried negative_ttl=0 ?
 Tried debug_options ALL,1 33,2 ?

 JD

I'm getting these logs when debug is On:

##
2012/04/17 12:11:18.200| ACLChecklist::preCheck: 0xa595a30 checking
'http_access allow src_ip'
2012/04/17 12:11:18.200| ACLList::matches: checking src_ip
2012/04/17 12:11:18.200| ACL::checklistMatches: checking 'src_ip'
2012/04/17 12:11:18.201| ACL::ChecklistMatches: result for 'src_ip' is -1
2012/04/17 12:11:18.201| ACLList::matches: result is false
2012/04/17 12:11:18.201| aclmatchAclList: 0xa595a30 returning false
(AND list entry failed to match)
2012/04/17 12:11:18.201| ACL::FindByName 'src_ip'
2012/04/17 12:11:18.201| ACLChecklist::asyncInProgress: 0xa595a30 async set to 1
2012/04/17 12:11:18.201| aclmatchAclList: async=1 nodeMatched=0
async_in_progress=1 lastACLResult() = 0 finished() = 0
##


And after five requests, I get:

##
2012/04/17 12:15:18.944| ACLChecklist::preCheck: 0xa5acd40 checking
'http_access allow src_ip'
2012/04/17 12:15:18.944| ACLList::matches: checking src_ip
2012/04/17 12:15:18.944| ACL::checklistMatches: checking 'src_ip'
2012/04/17 12:15:18.944| ACL::ChecklistMatches: result for 'src_ip' is -1
2012/04/17 12:15:18.944| ACLList::matches: result is false
2012/04/17 12:15:18.944| aclmatchAclList: 0xa5acd40 returning false
(AND list entry failed to match)
2012/04/17 12:15:18.944| ACL::FindByName 'src_ip'
2012/04/17 12:15:18.944| ACLChecklist::asyncInProgress: 0xa5acd40 async set to 1
2012/04/17 12:15:18.944| WARNING: All srcip processes are busy.
2012/04/17 12:15:18.944| WARNING: 5 pending requests queued
2012/04/17 12:15:18.944| Consider increasing the number of srcip
processes in your config file.
2012/04/17 12:15:18.945| aclmatchAclList: async=1 nodeMatched=0
async_in_progress=1 lastACLResult() = 0 finished() = 0
##

I can't figure out why the requests are being queued indefinitely in the helper.

Here's my squid.conf:

##
debug_options ALL,1 33,2 28,9
external_acl_type srcip negative_ttl=0 %URI /usr/lib/squid3/src_ip

acl src_ip external srcip

http_access allow src_ip
http_access deny all

http_port 3128
##

-- 
Mohamed Amine Kadimi

Tél     : +212 (0) 675 72 36 45
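
(Side note, as a sketch only: for an IP-matching helper like the one in this
thread, %SRC rather than %URI is the format token that delivers the client
IP, e.g.:)

external_acl_type srcip negative_ttl=0 %SRC /usr/lib/squid3/src_ip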


Re: [squid-users] Need help to build my own external help

2012-04-17 Thread John Doe
From: Mohamed Amine Kadimi amine.kad...@gmail.com

  *cp = '\0'; } ... fflush(stderr); fflush(stdout); }
 I can't figure out why the requests are being queued indefinitely in the 
 helper.

Are you flushing stdout?

JD


Re: [squid-users] Need help to build my own external help

2012-04-17 Thread Mohamed Amine Kadimi
Solved with:

fflush(stderr); fflush(stdout);

Many thanks John and Amos!

  *cp = '\0'; } ... fflush(stderr); fflush(stdout); }
 I can't figure out why the requests are being queued indefinitely in the
 helper.

 Are you flushing stdout?

 JD


-- 
Mohamed Amine Kadimi

Tél     : +212 (0) 675 72 36 45
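
(For reference, a minimal version of the fixed helper loop, as a sketch
assuming the plain OK-for-everything protocol from this thread: without the
fflush() call, stdio buffers the replies and Squid queues requests forever,
which is exactly the symptom reported above.)

#include <stdio.h>

#define MAX_INPUT 8192

int main(void)
{
    char request[MAX_INPUT];
    while (fgets(request, MAX_INPUT, stdin) != NULL)
    {
        printf("OK\n");
        fflush(stdout);   /* the fix: hand each reply to Squid immediately */
    }
    return 0;
}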


Re: [squid-users] Need help to build my own external help

2012-04-16 Thread Mohamed Amine Kadimi
Hi,

I reduced my program to that:

##
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_INPUT 8192

int main(int argc, char **argv)
{
    char request[MAX_INPUT];
    while (fgets(request, MAX_INPUT, stdin) != NULL)
    {
        printf("OK\n");
    }
    return 0;
}

##

But I still get the same problem.


2012/4/12 Mohamed Amine Kadimi amine.kad...@gmail.com:
 Problem solved partially by moving the executable to /var/lib/squid. I
 no longer get the errors in cache.log.

 However, the browser and squidclient are unable to get a page from the
 internet; they keep trying indefinitely and no error is reported.

 2012/4/12 Mohamed Amine Kadimi amine.kad...@gmail.com:
 2012/4/11 Amos Jeffries squ...@treenet.co.nz:
 On 12.04.2012 06:12, Mohamed Amine Kadimi wrote:

 2012/4/10 Amos Jeffries squ...@treenet.co.nz:

 On 11.04.2012 03:27, Mohamed Amine Kadimi wrote:


 Hello,

 I'm trying to make an external helper which will be called by an acl,
 so I have created one which is very simple: it takes an IP in stdin
 and returns OK if it matches a predefined IP.

 It works when I test it from the CLI, however when I put the relevant
 directives in the squid.conf file and restart squid the connection to
 internet is no longer possible.

 The message displayed by FF is : Firefox is configured to use a proxy
 server that is refusing connections.



 It would seem Squid is not listening on the IP:port which Firefox is
 trying
 to use, or a firewall is actively rejecting port 3128 TCP connections.

 1) check that squid is running okay. It should be fine if your helper
 runs
 okay on command line, but read+execute access permission differences
 between
 the squids user and your own user account can still cause problems. Run
 squid -k parse or look in cache.log for message if Squid is not
 starting.

 2) check that port 3128 is accessible. telnet etc can be used here. A
 packet
 dump may be needed to find which device is rejecting TCP packets to port
 3128.


 It's not a connectivity issue since Squid is working fine unless I
 uncomment the lines relevant to my external helper.

 I noticed some errors I didn't understand in the cache.log:

 ###
 2012/04/11 17:56:19| Accepting  HTTP connections at [::]:3128, FD 24.
 2012/04/11 17:56:19| HTCP Disabled.
 2012/04/11 17:56:19| Squid modules loaded: 0
 2012/04/11 17:56:19| Adaptation support is off.
 2012/04/11 17:56:19| Ready to serve requests.
 2012/04/11 17:56:19| WARNING: src_ip_ext #1 (FD 10) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #4 (FD 16) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #2 (FD 12) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #3 (FD 14) exited


 These cause ...


 2012/04/11 17:56:19| Too few src_ip_ext processes are running
 2012/04/11 17:56:19| storeDirWriteCleanLogs: Starting...
 2012/04/11 17:56:19|   Finished.  Wrote 0 entries.
 2012/04/11 17:56:19|   Took 0.00 seconds (  0.00 entries/sec).
 FATAL: The src_ip_ext helpers are crashing too rapidly, need help!


 ... this ...



 Squid Cache (Version 3.1.6): Terminated abnormally.


 ... resulting in the proxy being shut down, i.e. (1).


 ###

 I think I'll need to review my program.


 Hmm. The only thing that looks like it might cause issues is fopen() for the
 debug log.


 I've rewritten the source code excluding fopen() and handling
 concurrency but I still get the same problem.

 Here's the new one:

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>

 #define MAX_INPUT 8192

 int main(int argc, char **argv)
 {
     char request[MAX_INPUT];

     while (fgets(request, MAX_INPUT, stdin) != NULL)
     {
         const char *channel_id = strtok(request, " ");
         char *detail = strtok(NULL, "\n");

         if (detail == NULL)
         {
             /* Only 1 parameter supplied. We are expecting at least 2
                (including the channel ID) */
             fprintf(stderr, "FATAL: %s is concurrent and requires the "
                     "concurrency option to be specified.\n", argv[0]);
             exit(1);
         }

         if (strcmp(detail, "172.30.30.1") == 0) printf("%s OK\n", channel_id);
         else printf("%s ERR\n", channel_id);
     }
     return 0;
 }



 --
 Mohamed Amine Kadimi

 Tél     : +212 (0) 675 72 36 45



-- 
Mohamed Amine Kadimi

Tél     : +212 (0) 675 72 36 45


Re: [squid-users] Need help to build my own external help

2012-04-16 Thread John Doe
From: Mohamed Amine Kadimi amine.kad...@gmail.com

 I reduced my program to that:
 
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #define MAX_INPUT 8192
 int main(int argc, char **argv)
 {
    char request[MAX_INPUT];
    while (fgets(request, MAX_INPUT, stdin) != NULL)
    {
        printf("OK\n");
    }
    return 0;
 }
 
 But I still get the same problem.

I jump in the middle of the conversation but, 
the return will constantly end the helper...
It is supposed to loop forever.
I used to use this:

#define INPUTSIZE 8192
char input[INPUTSIZE];
while (fgets(input, sizeof(input), stdin)) {
    if ((cp = strchr(input, '\n')) == NULL) {
        fprintf(stderr, "filter: input too big: %s\n", input);
    } else {
        *cp = '\0';
    }
    ... /* per-request work */
    fflush(stderr);
    fflush(stdout);
}

JD


Re: [squid-users] Need help to build my own external help

2012-04-16 Thread Mohamed Amine Kadimi
 I reduced my program to that:

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #define MAX_INPUT 8192
 int main(int argc, char **argv)
 {
    char request[MAX_INPUT];
    while (fgets(request, MAX_INPUT, stdin) != NULL)
    {
        printf("OK\n");
    }
    return 0;
 }

 But I still get the same problem.

 I jump in the middle of the conversation but,
 the return will constantly end the helper...
 It is supposed to loop forever.
 I used to use this:

 #define INPUTSIZE 8192
 char input[INPUTSIZE];
 while (fgets(input, sizeof(input), stdin)) {
     if ((cp = strchr(input, '\n')) == NULL) {
         fprintf(stderr, "filter: input too big: %s\n", input);
     } else {
         *cp = '\0';
     }
     ... /* per-request work */
     fflush(stderr);
     fflush(stdout);
 }

 JD

Actually, it should loop forever because the return is outside the
while (fgets(...) != NULL) and fgets is supposed to not return NULL
unless some error occurs.

Also refer to the source code of ext_session_acl which has a return 0
at the end.


-- 
Mohamed Amine Kadimi

Tél     : +212 (0) 675 72 36 45


Re: [squid-users] Need help to build my own external help

2012-04-16 Thread Amos Jeffries

On 17.04.2012 03:42, Mohamed Amine Kadimi wrote:

Hi,

I reduced my program to that:

##
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_INPUT 8192

int main(int argc, char **argv)
{
    char request[MAX_INPUT];
    while (fgets(request, MAX_INPUT, stdin) != NULL)
    {
        printf("OK\n");
    }
    return 0;
}

##

But I still get the same problem.


Then it is not Squid or the helper code.


There is some external factor preventing Squid from either starting or
using stdin/stdout to the helper.



Unless you are cross-compiling the helper somehow?


Amos



Re: [squid-users] Need help for ACL: Authentication web Form + Cookies

2012-04-14 Thread Amos Jeffries

On 14/04/2012 6:08 a.m., David Touzeau wrote:

Dear all

I would like to use 2 external helpers in order to use a web 
authentication form


The deal is to use a combination of ext_session_acl and my own external
helper

But I do not know how to create the ACLs

I have done 50%
---
external_acl_type checkauth concurrency=100 ttl=3 %SRC %URI %{Host} 
%{Cookie} /usr/bin/squid-helper.php


Note that Cookie: headers can get very large. Squid permits up to 64KB 
before stripping them, which has been spotted happening.



external_acl_type AuthenticatedSessions ttl=60 concurrency=100 %SRC 
/usr/local/sbin/squid/ext_session_acl -t 48000 -b 
/var/lib/squid/session-web-form.db

acl AuthenticatedHelper external checkauth
acl Authenticated_users external AuthenticatedSessions
deny_info http://10.10.10.10/login.php checkauth
http_access deny !AuthenticatedHelper

In this model the squid-helper.php checks the cookie sent by the
http://10.10.10.10/login.php page.

If the cookie exists then squid-helper.php answers OK.
If the request is for http://10.10.10.10/login.php the squid-helper.php
answers OK in order to allow the authentication web page.
If the cookie does not exist then squid-helper.php answers ERR and the
login.php page is in charge of authenticating the user and creating the
new cookie.


The problem with this is that when the user tries to connect to another
website, the cookie does not exist.
squid-helper.php answers ERR and requests are redirected back to the
login page.


To get this to 100% I need to force squid to identify the user
after a positive answer from squid-helper.php.

I am thinking about using the session helper (the AuthenticatedSessions acl).
If the request passes the AuthenticatedHelper acl and the request is not
in the Authenticated_users acl then a session is created and squid
processes the request.
If the request passes AuthenticatedHelper and passes Authenticated_users
then squid processes the request.


Is there a more proper/simple way?


There is no proper way. HTTP is stateless messaging. A session is a stateful
transaction stream.


By all means use your helper to collect some data, but store it in a 
database accessible to Squid, not a Cookie.

The session helper in active mode maintains one such local database.



How to merge the 2 helpers in order to make it work ?


Have your login script create an entry in 
/var/lib/squid/session-web-form.db. You may need to update to a session 
helper which supports the 4.x+ Berkeley database format for multiple access.


NP: I'm also going to post a different session helper soon to squid-dev 
which can use other database types, and supply credentials for Squid 
logging.


Amos
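
(A sketch of one way to order the two ACLs, reusing the names defined
earlier in this thread; ordering matters, so established sessions pass
before anyone is bounced to the login page:)

http_access allow Authenticated_users
deny_info http://10.10.10.10/login.php AuthenticatedHelper
http_access deny !AuthenticatedHelper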


Re: [squid-users] Need help for ACL: Authentication web Form + Cookies

2012-04-14 Thread David Touzeau

Thanks Amos

That should be very cool ! especially MySQL


Le 14/04/2012 09:11, Amos Jeffries a écrit :

On 14/04/2012 6:08 a.m., David Touzeau wrote:

Dear all

I would like to use 2 external helpers in order to use a web 
authentication form


The deal is to use a combination of ext_session_acl and my own
external helper

But I do not know how to create the ACLs

I have done 50%
---
external_acl_type checkauth concurrency=100 ttl=3 %SRC %URI %{Host} 
%{Cookie} /usr/bin/squid-helper.php


Note that Cookie: headers can get very large. Squid permits up to 64KB 
before stripping them, which has been spotted happening.



external_acl_type AuthenticatedSessions ttl=60 concurrency=100 %SRC 
/usr/local/sbin/squid/ext_session_acl -t 48000 -b 
/var/lib/squid/session-web-form.db

acl AuthenticatedHelper external checkauth
acl Authenticated_users external AuthenticatedSessions
deny_info http://10.10.10.10/login.php checkauth
http_access deny !AuthenticatedHelper

In this model the squid-helper.php checks the cookie sent by the
http://10.10.10.10/login.php page.

If the cookie exists then squid-helper.php answers OK.
If the request is for http://10.10.10.10/login.php the squid-helper.php
answers OK in order to allow the authentication web page.
If the cookie does not exist then squid-helper.php answers ERR and the
login.php page is in charge of authenticating the user and creating the
new cookie.


The problem with this is that when the user tries to connect to another
website, the cookie does not exist.
squid-helper.php answers ERR and requests are redirected back to the
login page.


To get this to 100% I need to force squid to identify the user
after a positive answer from squid-helper.php.
I am thinking about using the session helper (the AuthenticatedSessions
acl).
If the request passes the AuthenticatedHelper acl and the request is
not in the Authenticated_users acl then a session is created and
squid processes the request.
If the request passes AuthenticatedHelper and passes Authenticated_users
then squid processes the request.


Is there a more proper/simple way?


There is no proper way. HTTP is stateless messaging. A session is a
stateful transaction stream.


By all means use your helper to collect some data, but store it in a 
database accessible to Squid, not a Cookie.

The session helper in active mode maintains one such local database.



How to merge the 2 helpers in order to make it work ?


Have your login script create an entry in 
/var/lib/squid/session-web-form.db. You may need to update to a 
session helper which supports the 4.x+ Berkeley database format for
multiple access.


NP: I'm also going to post a different session helper soon to 
squid-dev which can use other database types, and supply credentials 
for Squid logging.


Amos



Re: [squid-users] Need help to build my own external help

2012-04-12 Thread Mohamed Amine Kadimi
2012/4/11 Amos Jeffries squ...@treenet.co.nz:
 On 12.04.2012 06:12, Mohamed Amine Kadimi wrote:

 2012/4/10 Amos Jeffries squ...@treenet.co.nz:

 On 11.04.2012 03:27, Mohamed Amine Kadimi wrote:


 Hello,

 I'm trying to make an external helper which will be called by an acl,
 so I have created one which is very simple: it takes an IP in stdin
 and returns OK if it matches a predefined IP.

 It works when I test it from the CLI, however when I put the relevant
 directives in the squid.conf file and restart squid the connection to
 internet is no longer possible.

 The message displayed by FF is : Firefox is configured to use a proxy
 server that is refusing connections.



 It would seem Squid is not listening on the IP:port which Firefox is
 trying
 to use, or a firewall is actively rejecting port 3128 TCP connections.

 1) check that squid is running okay. It should be fine if your helper
 runs
 okay on command line, but read+execute access permission differences
 between
 the squids user and your own user account can still cause problems. Run
 squid -k parse or look in cache.log for message if Squid is not
 starting.

 2) check that port 3128 is accessible. telnet etc can be used here. A
 packet
 dump may be needed to find which device is rejecting TCP packets to port
 3128.


 It's not a connectivity issue since Squid is working fine unless I
 uncomment the lines relevant to my external helper.

 I noticed some errors I didn't understand in the cache.log:

 ###
 2012/04/11 17:56:19| Accepting  HTTP connections at [::]:3128, FD 24.
 2012/04/11 17:56:19| HTCP Disabled.
 2012/04/11 17:56:19| Squid modules loaded: 0
 2012/04/11 17:56:19| Adaptation support is off.
 2012/04/11 17:56:19| Ready to serve requests.
 2012/04/11 17:56:19| WARNING: src_ip_ext #1 (FD 10) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #4 (FD 16) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #2 (FD 12) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #3 (FD 14) exited


 These cause ...


 2012/04/11 17:56:19| Too few src_ip_ext processes are running
 2012/04/11 17:56:19| storeDirWriteCleanLogs: Starting...
 2012/04/11 17:56:19|   Finished.  Wrote 0 entries.
 2012/04/11 17:56:19|   Took 0.00 seconds (  0.00 entries/sec).
 FATAL: The src_ip_ext helpers are crashing too rapidly, need help!


 ... this ...



 Squid Cache (Version 3.1.6): Terminated abnormally.


 ... resulting in the proxy being shut down, i.e. (1).


 ###

 I think I'll need to review my program.


 Hmm. The only thing that looks like it might cause issues is fopen() for the
 debug log.


I've rewritten the source code excluding fopen() and handling
concurrency but I still get the same problem.

Here's the new one:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_INPUT 8192

int main(int argc, char **argv)
{
    char request[MAX_INPUT];

    while (fgets(request, MAX_INPUT, stdin) != NULL)
    {
        const char *channel_id = strtok(request, " ");
        char *detail = strtok(NULL, "\n");

        if (detail == NULL)
        {
            /* Only 1 parameter supplied. We are expecting at least 2
               (including the channel ID) */
            fprintf(stderr, "FATAL: %s is concurrent and requires the "
                    "concurrency option to be specified.\n", argv[0]);
            exit(1);
        }

        if (strcmp(detail, "172.30.30.1") == 0) printf("%s OK\n", channel_id);
        else printf("%s ERR\n", channel_id);
    }
    return 0;
}
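
(Note, as a sketch: for Squid to actually send the channel-ID this code
parses, the external_acl_type line needs a non-zero concurrency setting,
e.g.:)

external_acl_type src_ip_ext ttl=1 concurrency=10 %SRC /var/lib/squid/src_ip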


Re: [squid-users] Need help to build my own external help

2012-04-12 Thread Mohamed Amine Kadimi
Problem solved partially by moving the executable to /var/lib/squid. I
no longer get the errors in cache.log.

However, the browser and squidclient are unable to get a page from the
internet; they keep trying indefinitely and no error is reported.

2012/4/12 Mohamed Amine Kadimi amine.kad...@gmail.com:
 2012/4/11 Amos Jeffries squ...@treenet.co.nz:
 On 12.04.2012 06:12, Mohamed Amine Kadimi wrote:

 2012/4/10 Amos Jeffries squ...@treenet.co.nz:

 On 11.04.2012 03:27, Mohamed Amine Kadimi wrote:


 Hello,

 I'm trying to make an external helper which will be called by an acl,
 so I have created one which is very simple: it takes an IP in stdin
 and returns OK if it matches a predefined IP.

 It works when I test it from the CLI, however when I put the relevant
 directives in the squid.conf file and restart squid the connection to
 internet is no longer possible.

 The message displayed by FF is : Firefox is configured to use a proxy
 server that is refusing connections.



 It would seem Squid is not listening on the IP:port which Firefox is
 trying
 to use, or a firewall is actively rejecting port 3128 TCP connections.

 1) check that squid is running okay. It should be fine if your helper
 runs
 okay on command line, but read+execute access permission differences
 between
 the squids user and your own user account can still cause problems. Run
 squid -k parse or look in cache.log for message if Squid is not
 starting.

 2) check that port 3128 is accessible. telnet etc can be used here. A
 packet
 dump may be needed to find which device is rejecting TCP packets to port
 3128.


 It's not a connectivity issue since Squid is working fine unless I
 uncomment the lines relevant to my external helper.

 I noticed some errors I didn't understand in the cache.log:

 ###
 2012/04/11 17:56:19| Accepting  HTTP connections at [::]:3128, FD 24.
 2012/04/11 17:56:19| HTCP Disabled.
 2012/04/11 17:56:19| Squid modules loaded: 0
 2012/04/11 17:56:19| Adaptation support is off.
 2012/04/11 17:56:19| Ready to serve requests.
 2012/04/11 17:56:19| WARNING: src_ip_ext #1 (FD 10) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #4 (FD 16) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #2 (FD 12) exited
 2012/04/11 17:56:19| WARNING: src_ip_ext #3 (FD 14) exited


 These cause ...


 2012/04/11 17:56:19| Too few src_ip_ext processes are running
 2012/04/11 17:56:19| storeDirWriteCleanLogs: Starting...
 2012/04/11 17:56:19|   Finished.  Wrote 0 entries.
 2012/04/11 17:56:19|   Took 0.00 seconds (  0.00 entries/sec).
 FATAL: The src_ip_ext helpers are crashing too rapidly, need help!


 ... this ...



 Squid Cache (Version 3.1.6): Terminated abnormally.


 ... resulting in the proxy being shut down, i.e. (1).


 ###

 I think I'll need to review my program.


 Hmm. The only thing that looks like it might cause issues is fopen() for the
 debug log.


 I've rewritten the source code excluding fopen() and handling
 concurrency but I still get the same problem.

 Here's the new one:

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>

 #define MAX_INPUT 8192

 int main(int argc, char **argv)
 {
     char request[MAX_INPUT];

     while (fgets(request, MAX_INPUT, stdin) != NULL)
     {
         const char *channel_id = strtok(request, " ");
         char *detail = strtok(NULL, "\n");

         if (detail == NULL)
         {
             /* Only 1 parameter supplied. We are expecting at least 2
                (including the channel ID) */
             fprintf(stderr, "FATAL: %s is concurrent and requires the "
                     "concurrency option to be specified.\n", argv[0]);
             exit(1);
         }

         if (strcmp(detail, "172.30.30.1") == 0) printf("%s OK\n", channel_id);
         else printf("%s ERR\n", channel_id);
     }
     return 0;
 }



-- 
Mohamed Amine Kadimi

Tél     : +212 (0) 675 72 36 45


Re: [squid-users] Need help to build my own external help

2012-04-11 Thread Mohamed Amine Kadimi
2012/4/10 Amos Jeffries squ...@treenet.co.nz:
 On 11.04.2012 03:27, Mohamed Amine Kadimi wrote:

 Hello,

 I'm trying to make an external helper which will be called by an acl,
 so I have created one which is very simple: it takes an IP in stdin
 and returns OK if it matches a predefined IP.

 It works when I test it from the CLI, however when I put the relevant
 directives in the squid.conf file and restart squid the connection to
 internet is no longer possible.

 The message displayed by FF is : Firefox is configured to use a proxy
 server that is refusing connections.


 It would seem Squid is not listening on the IP:port which Firefox is trying
 to use, or a firewall is actively rejecting port 3128 TCP connections.

 1) check that squid is running okay. It should be fine if your helper runs
 okay on command line, but read+execute access permission differences between
 the squids user and your own user account can still cause problems. Run
 squid -k parse or look in cache.log for message if Squid is not starting.

 2) check that port 3128 is accessible. telnet etc can be used here. A packet
 dump may be needed to find which device is rejecting TCP packets to port
 3128.


It's not a connectivity issue since Squid is working fine unless I
uncomment the lines relevant to my external helper.

I noticed some errors I didn't understand in the cache.log:

###
2012/04/11 17:56:19| Accepting  HTTP connections at [::]:3128, FD 24.
2012/04/11 17:56:19| HTCP Disabled.
2012/04/11 17:56:19| Squid modules loaded: 0
2012/04/11 17:56:19| Adaptation support is off.
2012/04/11 17:56:19| Ready to serve requests.
2012/04/11 17:56:19| WARNING: src_ip_ext #1 (FD 10) exited
2012/04/11 17:56:19| WARNING: src_ip_ext #4 (FD 16) exited
2012/04/11 17:56:19| WARNING: src_ip_ext #2 (FD 12) exited
2012/04/11 17:56:19| WARNING: src_ip_ext #3 (FD 14) exited
2012/04/11 17:56:19| Too few src_ip_ext processes are running
2012/04/11 17:56:19| storeDirWriteCleanLogs: Starting...
2012/04/11 17:56:19|   Finished.  Wrote 0 entries.
2012/04/11 17:56:19|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: The src_ip_ext helpers are crashing too rapidly, need help!

Squid Cache (Version 3.1.6): Terminated abnormally.
###

I think I'll need to review my program.

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>

 #define MAX_INPUT 256


 HINT: input buffer from Squid is usually between 4KB-8KB, but can be larger
 (~32KB for 3.1/3.2). IP address has a limited range of text representations,
 but if you pass unconstrained details like URLs or HTTP header values to
 this helper it can trend towards the larger sizes. In which case it is
 useful to check whether the \n was received after fgets() and handle very
 long lines as a special input case.


Why is the input size so large? Could I not limit it if I just send
%SRC and %LOGIN?


 int main()
 {
     char request[MAX_INPUT];  /* this is a holder for the stdin request */

     /* below file is just to track execution of the script */
     FILE *fp;
     fp = fopen("file.txt", "a");
     fprintf(fp, "%s\n", "This is an execution"); /* append some text */
     fclose(fp);


     while (fgets(request, MAX_INPUT, stdin) != NULL) {

         const char *index;
         index = strtok(request, " \n");  /* this is to get rid of \n */


 NOTE: long-term you will want to add concurrency support. It is much faster
 than serial queries.

 Check out the squid-3.2 session helper while() loop logics for an example of
 how to pull the channel-ID (any bytes before the first  ) from the input
 before processing. It then just gets sent back to Squid unchanged in the
 printf before OK/ERR.

Sure, I'll try to make it faster. Is handling the channel-ID in the
input and output of my program all I have to do to support
concurrency?

Thanks,


--
Mohamed Amine Kadimi

Tél : +212 (0) 675 72 36 45
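
(Essentially, yes. As a sketch with example values: once concurrency=N is
set on the external_acl_type line, each request line from Squid carries a
numeric channel ID, the helper echoes that ID back in front of its OK/ERR,
and replies may then be returned out of order:)

  request from Squid:   0 172.30.30.1
  reply from helper:    0 OK
  request from Squid:   3 10.0.0.9
  reply from helper:    3 ERR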


Re: [squid-users] Need help to build my own external help

2012-04-10 Thread Amos Jeffries

On 11.04.2012 03:27, Mohamed Amine Kadimi wrote:

Hello,

I'm trying to make an external helper which will be called by an acl,
so I have created one which is very simple: it takes an IP in stdin
and returns OK if it matches a predefined IP.

It works when I test it from the CLI, however when I put the relevant
directives in the squid.conf file and restart squid the connection to
internet is no longer possible.

The message displayed by FF is: "Firefox is configured to use a proxy
server that is refusing connections."


It would seem Squid is not listening on the IP:port which Firefox is 
trying to use, or a firewall is actively rejecting port 3128 TCP 
connections.


1) check that squid is running okay. It should be fine if your helper 
runs okay on command line, but read+execute access permission 
differences between the squids user and your own user account can still 
cause problems. Run squid -k parse or look in cache.log for message if 
Squid is not starting.


2) check that port 3128 is accessible. telnet etc can be used here. A 
packet dump may be needed to find which device is rejecting TCP packets 
to port 3128.



Amos




Here's my squid.conf:


external_acl_type src_ip_ext ttl=1 concurrency=0 %SRC /root/C/srcIP

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl src_ip external src_ip_ext

http_access allow manager localhost
http_access deny manager
#http_access allow localnet
http_access allow src_ip
http_access deny all

http_port 3128


And the source code of the helper:

/*  */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_INPUT 256


HINT: input buffer from Squid is usually between 4KB-8KB, but can be 
larger (~32KB for 3.1/3.2). IP address has a limited range of text 
representations, but if you pass unconstrained details like URLs or HTTP 
header values to this helper it can trend towards the larger sizes. In 
which case it is useful to check whether the \n was received after 
fgets() and handle very long lines as a special input case.




int main()
{
    char request[MAX_INPUT];  /* this is a holder for the stdin request */


    /* below file is just to track execution of the script */
    FILE *fp;
    fp = fopen("file.txt", "a");
    fprintf(fp, "%s\n", "This is an execution"); /* append some text */
    fclose(fp);


    while (fgets(request, MAX_INPUT, stdin) != NULL) {

        const char *index;
        index = strtok(request, " \n");  /* this is to get rid of \n */


NOTE: long-term you will want to add concurrency support. It is much 
faster than serial queries.


Check out the squid-3.2 session helper while() loop logics for an 
example of how to pull the channel-ID (any bytes before the first  ) 
from the input before processing. It then just gets sent back to Squid 
unchanged in the printf before OK/ERR.




        if (strcmp(index, "172.30.30.1") == 0) {
            printf("OK\n");
        }
        else printf("ERR\n");
    }

return 0;
}
/*  */

This is just a proof of concept, not the final helper I intend to make
(I know the source IP can be controlled directly via ACLs).




Amos
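
(A sketch of the overlong-line check described in the HINT above; the
buffer size and the ERR reply are illustrative choices: if fgets() returns
without a \n in the buffer, the line was longer than the buffer and the
remainder should be drained before answering.)

#include <stdio.h>
#include <string.h>

#define MAX_INPUT 256

int main(void)
{
    char request[MAX_INPUT];
    while (fgets(request, MAX_INPUT, stdin) != NULL) {
        if (strchr(request, '\n') == NULL) {
            /* line longer than the buffer: discard the rest of it */
            int ch;
            while ((ch = getchar()) != EOF && ch != '\n')
                ;
            printf("ERR\n");   /* treat oversized input as a non-match */
            fflush(stdout);
            continue;
        }
        /* ... normal matching logic goes here ... */
        printf("OK\n");
        fflush(stdout);
    }
    return 0;
}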


Re: [squid-users] Need help with Parent/Client proxy configuration

2012-02-28 Thread Amos Jeffries

On 29/02/2012 10:57 a.m., Benjamin E. Nichols wrote:

I currently have two networks, one is upstream of the other


192.168.1.x with squid 3.1.16 cache @ 192.168.1.205

and down stream

10.10.1.x network with 10.10.1.105 Squid 3.1.16 Proxy cache


I need to know what I need to add to the 10.10.1.x proxy config file to
enable caching from the upstream squid box, and I want both squid
machines to serve cache.


These are two different and separate things: one is caching; the second
is fetching from an upstream peer (aka a parent).


To serve cache all you need is a cache in each proxy. If you mean 
sharing cache, that is not possible in an up/down hierarchy relationship.






# 


#Begin Squid Configuration  10.10.1.105
# 



http_port 10.10.1.105:3128
hierarchy_stoplist cgi-bin ?


You can drop hierarchy_stoplist.


cache_mem 500 MB
maximum_object_size_in_memory 150 MB


Er, you can fit 3 of these 150MB objects into your 500MB memory cache
space (cache_mem). If we assume your traffic is about average, each one
of those will shove more than 10,000 small objects out to disk on
arrival, which could be very slow.
It would be a good idea to drop your in-memory object size limit to 
permit more small objects to stay there.



maximum_object_size 150 MB
cache_dir ufs /mnt/secondary/var/spool/squid3 14000 32 256


There you go. You have enabled caching.


To fetch from an upstream proxy configure a link to it with cache_peer. 
The relationship type you want is parent.

  http://www.squid-cache.org/Doc/config/cache_peer/


Amos
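
(As a minimal sketch, with the upstream address taken from the network
description above, the downstream 10.10.1.105 box would add something like:)

# fetch misses through the upstream squid box
cache_peer 192.168.1.205 parent 3128 0 no-query default
# optional: force all misses through the parent
never_direct allow all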


Re: [squid-users] need help: proxy server forbids port 1863 tunnelling

2011-10-17 Thread Amos Jeffries

On 17/10/11 22:09, owl...@gmail.com wrote:

Hi, Jabber says "proxy server forbids port 1863 tunnelling" when I try
to connect to the server with Pidgin

Linux squid 2.6.32-5-amd64
Debian 6.0.3
Squid version Version: 2.7.STABLE9-2.1

Thanks


Add 1863 to your SSL_ports ACL to allow it past the CONNECT tunnel protection.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.16
  Beta testers wanted for 3.2.0.13


Re: [squid-users] need help: proxy server forbids port 1863 tunnelling

2011-10-17 Thread owl...@gmail.com
Thanks Amos


Re: [squid-users] Need help with multiple web server reverse proxy

2011-07-05 Thread Kinkie
On Tue, Jul 5, 2011 at 9:58 PM, Robin Bonin rbo...@gmail.com wrote:
 I have a squid reverse proxy working for all the domains that I
 specify in the squid.conf file. I would like to add an additional
 default rule, if the domain does not match one of the known domains.

 I am mapping the domains to the particular servers using the following
 config lines.

 cache_peer 10.10.20.15 parent 80 0 no-query no-digest originserver 
 name=lamp_server login=PASS
 acl sites_lamp dstdomain (list of domain names here)
 cache_peer_access lamp_server allow sites_lamp

 is there an additional acl line that I can use for "other"?

"all" will do; just place it at the end of your cache_peer_access lines.

-- 
    /kinkie


Re: [squid-users] Need help with multiple web server reverse proxy

2011-07-05 Thread Robin Bonin
My goal is to get a handful of domains redirected to a lamp server and
the rest defaulted to my windows server.

I tried adding "all" to the windows server cache_peer_access line, and then
all traffic went to my windows server. I also tried playing with the
position of that line. It seems like no matter where it is, when I have
"all" in there, all traffic is redirected there.



On Tue, Jul 5, 2011 at 3:06 PM, Kinkie gkin...@gmail.com wrote:
 On Tue, Jul 5, 2011 at 9:58 PM, Robin Bonin rbo...@gmail.com wrote:
 I have a squid reverse proxy working for all the domains that I
 specify in the squid.conf file. I would like to add an additional
 default rule, if the domain does not match one of the known domains.

 I am mapping the domains to the particular servers using the following
 config lines.

 cache_peer 10.10.20.15 parent 80 0 no-query no-digest originserver 
 name=lamp_server login=PASS
 acl sites_lamp dstdomain (list of domain names here)
 cache_peer_access lamp_server allow sites_lamp

  is there an additional acl line that I can use for "other"?

  "all" will do; just place it at the end of your cache_peer_access lines.

 --
     /kinkie



Re: [squid-users] Need help with multiple web server reverse proxy

2011-07-05 Thread Amos Jeffries

On Tue, 5 Jul 2011 17:41:32 -0500, Robin Bonin wrote:
My goal is to get a handful of domains redirected to a lamp server and
the rest defaulted to my windows server.

I tried adding "all" to the windows server cache_peer_access line, and then
all traffic went to my windows server. I also tried playing with the
position of that line. It seems like no matter where it is, when I have
"all" in there, all traffic is redirected there.



Like this:
 cache_peer_access lamp_server allow sites_lamp
 cache_peer_access lamp_server deny all

 cache_peer_access windows_server deny sites_lamp
 cache_peer_access windows_server allow all


Amos



Re: [squid-users] Need help with multiple web server reverse proxy

2011-07-05 Thread Robin Bonin
Thanks, that did it, I appreciate your help.

On Tue, Jul 5, 2011 at 7:27 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Tue, 5 Jul 2011 17:41:32 -0500, Robin Bonin wrote:

 My goal is to get a handful of domains redirected to a lamp server and
 the rest defaulted to my windows server.

 I tried adding "all" to the windows server cache_peer_access line, and then
 all traffic went to my windows server. I also tried playing with the
 position of that line. It seems like no matter where it is, when I have
 "all" in there, all traffic is redirected there.


 Like this:
  cache_peer_access lamp_server allow sites_lamp
  cache_peer_access lamp_server deny all

  cache_peer_access windows_server deny sites_lamp
  cache_peer_access windows_server allow all


 Amos




Re: [squid-users] Need help configuring squid 3.1.11 to pass Certs

2011-02-24 Thread Amos Jeffries

On 25/02/11 06:32, Martin (Jake) Jacobson wrote:

Hi,

I am trying to build a squid box that will proxy requests to two sites
that require a PKI cert.  The client doesn't have a cert so I want the
squid box to take a request from the client and submit the certs it
has to retrieve the resource.

I was able to build squid 3.1.11 with ssl support and I have a very
basic squid configuration to test.  When I run squid -k parse I see
that squid sees the certs

2011/02/24 17:23:19| Initializing cache_peer akocac SSL context
2011/02/24 17:23:19| Using certificate in /webroot/conf/squid/.ssl/server.crt
2011/02/24 17:23:19| Using private key in /webroot/conf/squid/.ssl/server.key
2011/02/24 17:23:19| NOTICE: Peer certificates are not verified for validity!
2011/02/24 17:23:19| Initializing cache_peer informationassurance SSL context
2011/02/24 17:23:19| Using certificate in /webroot/conf/squid/.ssl/server.crt
2011/02/24 17:23:19| Using private key in /webroot/conf/squid/.ssl/server.key
2011/02/24 17:23:19| NOTICE: Peer certificates are not verified for validity!

BUT when I run squid -Nd1 I don't see any information about using the
certs or private key!!!


Strange. Check that you do not have another instance of Squid using
another squid.conf sitting around somewhere.





When squid is running I have tried to

1.  Configure my web browser to use the squid proxy and retrieve a
resource but instead of the Squid certs being passed, I am requested
to use my certs loaded in my browser.


The major browsers pass https:// requests to the proxy for handling 
quite differently to http://.
They only open a CONNECT tunnel instead and do all of the SSL encryption 
inside it themselves.




2.  Telneting to the box and do a GET request for the resouced
   telnet localhost 3128
   Connected to linsrcheval2o.
   Escape character is '^]'.
   GET https://myProtectedSitel/pki/login/external_silent_autologin.jhtml
   HTTP/1.0 403 Forbidden


Well, to point out the obvious, that is Forbidden. The test itself, if 
not forbidden by the ACLs somewhere, should have used the squid 
cache_peer certs.


Find out which software and controls are blocking it and you will have a 
good way to test this setup.




Both cases seem to indicate that squid is not using the PKI cert/key
it has.  Here is my configuration file:

cache_peer protectedSite1 parent 443 0 no-query ssl
sslcert=/webroot/conf/squid/.ssl/server.crt
sslkey=/webroot/conf/squid/.ssl/server.key
sslcapath=/webroot/conf/squid/.ssl/ca/ sslversion=3
sslflags=DONT_VERIFY_PEER originserver proxy-only name=site1



cache_peer protectedSite2 sibling 443 0 no-query no-digest
no-netdb-exchange ssl sslcert=/webroot/conf/squid/.ssl/server.crt
sslkey=/webroot/conf/squid/.ssl/server.key
sslcapath=/webroot/conf/squid/.ssl/ca/ sslversion=3
sslflags=DONT_VERIFY_PEER originserver proxy-only name=site2



Assuming the keys are all correct that looks right for encrypting the 
origin link from Squid.



Let me know if you need anything else and thanks for the help on this.



In order to get the browsers past their tendency for CONNECT you will 
have to setup an http_port with reverse-proxy settings and set the local 
DNS to point browsers at your Squid for that particular site.
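
E.g. a sketch (shown as an https_port on the assumption that the browser
should reach Squid over SSL for this site; the cert paths are the ones
already in your config, though the certificate itself needs to be one
the browser will trust for that hostname):

  https_port 443 accel defaultsite=myProtectedSitel vhost
    cert=/webroot/conf/squid/.ssl/server.crt
    key=/webroot/conf/squid/.ssl/server.key

(wrapped for readability; it is one long line)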


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] Need Help adding SSL support in squid.conf for 2 of the 4 domains I am caching

2010-12-14 Thread Amos Jeffries
On Tue, 14 Dec 2010 14:28:06 -0500, Raymond Leonard rclch...@gmail.com
wrote:
 Hello all,
 
 I have a working squid.conf that allows me to access img01.cppt.com,
 and img02.cppt.com. I have been tasked
 to add ssl support so that the content can be accessed via http and
https.
 
 Here is my working squid.conf--
 

--
 http_port 80 accel defaultsite=img01.cppt.com vhost
 cache_peer 172.19.23.91 parent 80 0 no-query originserver name=myAccel
 cache_peer 172.19.23.92 parent 80 0 no-query originserver name=server_2
 cache_peer 172.19.23.95 parent 80 0 no-query originserver
name=myAccel_bu
 cache_peer 172.19.23.12 parent 80 0 no-query originserver
name=server_2_bu
 
 acl all src 0.0.0.0/0.0.0.0
 acl our_sites dstdomain img01.cppt.com
 acl sites_server_2 dstdomain img02.cppt.com
 acl our_sites3 dstdomain image1.emktg.com
 acl our_sites4 dstdomain image2.emktg.com
 
 http_access allow our_sites
 http_access allow sites_server_2
 http_access allow our_sites3
 http_access allow our_sites4
 
 cache_peer_access myAccel allow our_sites
 cache_peer_access myAccel_bu allow our_sites
 cache_peer_access server_2 allow sites_server_2
 cache_peer_access server_2 allow our_sites3
 cache_peer_access server_2 allow our_sites4
 cache_peer_access server_2_bu allow sites_server_2
 cache_peer_access server_2_bu allow our_sites3
 cache_peer_access server_2_bu allow our_sites4

-
 
 
 I have created the wild card certificate on the squid server. Just was
 wondering
 if someone could help with my new squid.conf file to accomplish this.
 Here is what I have done thus far--
 

Your spec above says via http and https, therefore keep the old config.
Add the HTTPS bits into it, step by step.

Step 1) port to accept traffic.

 
 ---
 
 https_port 443 cert=/usr/newrprgate/CertAuth/testcert.cert
 key=/usr/newrprgate/CertAuth/testkey.pem default
 defaultsite=img01.cppt.com vhost

Slightly altered:

 https_port 443 accel defaultsite=img01.cppt.com vhost
   cert=/usr/newrprgate/CertAuth/testcert.cert
   key=/usr/newrprgate/CertAuth/testkey.pem

and place it next to the existing http_port entry. (I've wrapped for
brevity; it's one long line)

This can be done by itself, no other changes. When the certs work the site
should be contactable via https:// immediately. The proxy-origin traffic
will still be HTTP-only but the public side should be fully working HTTPS
to the proxy. Test this and make sure it works before going any more
complicated.

 
 cache_peer 172.19.23.91 parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=myAccelsecure
 cache_peer 172.19.23.92 parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=server_2secure
 
 cache_peer 172.19.23.91 parent 80 0 no-query originserver name=myAccel
 cache_peer 172.19.23.92 parent 80 0 no-query originserver name=server_2
 cache_peer 172.19.23.95 parent 80 0 no-query originserver
name=myAccel_bu
 cache_peer 172.19.23.12 parent 80 0 no-query originserver
name=server_2_bu
 
 acl all src 0.0.0.0/0.0.0.0
 acl our_sitessecure dstdomain img01.cppt.com
 acl sites_server_2secure dstdomain img02.cppt.com
 acl our_sites dstdomain img01.cppt.com
 acl sites_server_2 dstdomain img02.cppt.com
 acl our_sites3 dstdomain image.emktg.com
 acl our_sites4 dstdomain image4.emktg.com
 

No need for the our_sites*secure* variant rules. They duplicate the
earlier definitions. ACL names are only used internally to the config
file, so the old definitions can be re-used.

 
 http_access allow our_sitessecure
 http_access allow sites_server_2secure
 
 http_access allow our_sites
 http_access allow sites_server_2
 http_access allow our_sites3
 http_access allow our_sites4
 
 
 
 cache_peer_access myAccelsecure allow our_sitesecure
 cache_peer_access server_2secure allow sites_server_2secure
 
 cache_peer_access myAccel allow our_sites
 cache_peer_access myAccel_bu allow our_sites
 cache_peer_access server_2 allow sites_server_2
 cache_peer_access server_2 allow our_sites3
 cache_peer_access server_2 allow our_sites4
 cache_peer_access server_2_bu allow sites_server_2
 cache_peer_access server_2_bu allow our_sites3
 cache_peer_access server_2_bu allow our_sites4
 -
 
 Any help is much appreciated. Thanks for looking!

Question:
  does it matter if HTTPS traffic to the proxy goes over HTTP links back
to the origin server or vice versa?

If not you can drop half the cache_peer links and use the originals, or
convert them to SSL links. It is a simpler and more easily maintained
config if all the traffic back to origins can share links.

Otherwise you will need to add a proto HTTPS ACL to the config and lock
down which protocol goes where, in addition to the domain names.
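
A sketch of that locking-down, reusing the ACL and peer names above (the
only new piece is the proto ACL):

  acl secure proto HTTPS
  cache_peer_access myAccelsecure allow our_sites secure
  cache_peer_access myAccelsecure deny all
  cache_peer_access myAccel allow our_sites !secure
  cache_peer_access myAccel deny all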

Amos


Re: [squid-users] need help port 80

2010-04-13 Thread Amos Jeffries

da...@lafourmi.de wrote:

hello squid friends,

i use squid 3.1.0.16
i have to block port 80 for all applications
only firefox should have access!

how can i realize that with squid?

thanks for help


Firstly, the 3.1 betas are now obsolete, please move up to 3.1.1.
Beta 16 particularly had some IP handling issues you want to get away from.

The browser ACL type matches the User-Agent string. Be aware that 
it is trivial for any other software or even manual traffic to forge the 
UA header.
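
For example, a minimal sketch (the regexp is an assumption about what
your Firefox actually sends; check access.log for the real User-Agent
values, and combine this with your other http_access rules):

  acl firefox browser -i firefox
  http_access deny !firefox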


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] need help port 80

2010-04-13 Thread davsigh
Hi
You can use port 80 only for squid by stopping the httpd server, which uses
port 80 by default; if you want apache as well then change the apache port.
--Original Message--
From: da...@lafourmi.de
To: squid-users-h...@squid-cache.org
To: squid-users@squid-cache.org
To: Amos Jeffries
Subject: [squid-users] need help port 80
Sent: Apr 13, 2010 5:07 PM

hello squid friends,

i use squid 3.1.0.16
i have to block port 80 for all applications
only firefox should have access!

how can i realize that with squid?

thanks for help


Sent from BlackBerry® on Airtel

Re: [squid-users] need help port 80

2010-04-13 Thread da...@lafourmi.de

davs...@gmail.com schrieb:

Hi
You can use port 80 only for squid by stopping the httpd server, which uses
port 80 by default; if you want apache as well then change the apache port.
--Original Message--
From: da...@lafourmi.de
To: squid-users-h...@squid-cache.org
To: squid-users@squid-cache.org
To: Amos Jeffries
Subject: [squid-users] need help port 80
Sent: Apr 13, 2010 5:07 PM

hello squid friends,

i use squid 3.1.0.16
i have to block port 80 for all applications
only firefox should have access!

how can i realize that with squid?

thanks for help


Sent from BlackBerry® on Airtel

f.e.
i change the apache port to port 60 and in squid.conf i change http_port to 
60, and then all applications are blocked except firefox?

is that correct?
thanks

--
___
David C. Heitmann
Systemadministration

email: da...@lafourmi.de
www.lafourmi.de

lafourmi postproduction GmbH
Schulterblatt 58 / Haus C
D-20357 Hamburg
Tel. 040 – 4321 677 – 00
Fax  040 – 4321 677 – 07

Geschäftsführer: Florian Bruchhäuser, Sascha Schmidt
Prokuristin: Rebekka Schmidt
Die Gesellschaft ist eingetragen im Handelsregister des
Amtsgerichts Hamburg unter der Nummer HR B 99367
Steuernummer: 02/858/00781






Re: [squid-users] need help port 80

2010-04-13 Thread John Doe
From: da...@lafourmi.de da...@lafourmi.de
 i have to block port 80 for all applications
 only firefox should have access!
 how can i realized that with squid?

From the default config file:
#   acl aclname browser  [-i] regexp ...
# # pattern match on User-Agent header (see also req_header below)

JD


  


Re: [squid-users] need help port 80

2010-04-13 Thread da...@lafourmi.de

Ohhh thanks john doe

it's really cool that there is a solution for that ;)
manipulating the ports for apache is the worst case for me because 
squidvir didn't work with that :(



but i don't understand
regexp pattern match on user agent

can you give me an example for dummies please ;)
THANKS in advance
dave




John Doe schrieb:

From: da...@lafourmi.de da...@lafourmi.de
  

i have to block port 80 for all applications
only firefox should have access!
how can i realized that with squid?



From the default config file:
#   acl aclname browser  [-i] regexp ...
# # pattern match on User-Agent header (see also req_header below)

JD


  

  


Re: [squid-users] need help.....

2010-02-11 Thread Chris Robertson

David C. Heitmann wrote:

how can i block msn live messenger or icq


1) Craft an Acceptable Use Policy with the provision that Internet 
access requires your customers to refrain from using chat programs.  Add 
an appropriate penalty (loss of internet access, posting of names to a 
hall of shame, etc.) for violations.
2) (Optional) Warn users who violate this policy and direct their 
attention to the penalty.

3) Enforce the AUP.



with these configurations in squid i have no success :(


# MSN Messenger

acl msn urlpath_regex -i gateway.dll
acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
acl msn1 req_mime_type application/x-msn-messenger

http_access deny msnd
http_access deny msn
http_access deny msn1



# ICQ

acl icq dstdomain .icq.com

http_access deny icq



i have no iptables


Are you saying you have no method of limiting your outbound traffic?  
Not much Squid can do if it's not involved.
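
Where the messenger traffic does traverse Squid as CONNECT requests,
something like this can catch it (a sketch; the ports are an assumption,
1863 being classic MSN and 5190 AIM/ICQ, and CONNECT being the stock
method ACL from the default config):

  acl im_ports port 1863 5190
  http_access deny CONNECT im_ports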



and squid version 3.1.0.16

thanks and regards
dave


Chris



Re: [squid-users] Need help with insert code into html body

2009-12-01 Thread Ronaldo Zhou
Hi,

   I read more and found an ICAP project named GreasySpoon (
http://greasyspoon.sourceforge.net/index.html ). I'll try this and
update here later.

Thanks,
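
Wiring an ICAP service into squid.conf would presumably look something
like this (a sketch using Squid 3.1's adaptation directives; the port
and the 'response' service path are my reading of the GreasySpoon
defaults, so check its docs):

icap_enable on
icap_service gs_resp respmod_precache bypass=0 icap://127.0.0.1:1344/response
adaptation_access gs_resp allow all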

On Sun, Nov 29, 2009 at 7:26 PM, lan messerschmidt
lan.messerschm...@gmail.com wrote:
 On Sun, Nov 29, 2009 at 4:45 PM, Ronaldo Zhou ronaldo.z...@gmail.com wrote:
 Hi everyone,

   I need your help with squid to insert some code into the html body
 returned to clients, for analytics reasons.



 Squid is designed to make NO changes to the original HTTP response body,
 though you can insert some additional headers.
 Also, eCAP is a new feature in squid-3.1.



Re: [squid-users] Need help with insert code into html body

2009-12-01 Thread Brian Mearns
Sorry, wrong reply address:

On Tue, Dec 1, 2009 at 8:44 AM, Brian Mearns mearn...@gmail.com wrote:
 On Tue, Dec 1, 2009 at 3:02 AM, Ronaldo Zhou ronaldo.z...@gmail.com wrote:
 Hi,

   I read more and found one ICAP project named GreasySpoon (
 http://greasyspoon.sourceforge.net/index.html ). I'll try this and
 update here later.

 Thanks,

 On Sun, Nov 29, 2009 at 7:26 PM, lan messerschmidt
 lan.messerschm...@gmail.com wrote:
 On Sun, Nov 29, 2009 at 4:45 PM, Ronaldo Zhou ronaldo.z...@gmail.com 
 wrote:
 Hi everyone,

   I need your help with squid to insert some code into the html body
 returned to clients, for analytics reasons.



 Squid is designed to make NO changes to the original HTTP response body,
 though you can insert some additional headers.
 Also, eCAP is a new feature in squid-3.1.



 Privoxy might be another route. While squid is designed to be a
 caching proxy and may have 3rd party modules or patches to make it
 change the content, Privoxy is designed from the start to modify
 content in certain ways. However, as with the other solutions
 mentioned in this thread, this is meant primarily for removing target
 content (typically images) more than straight out modification, and as
 Amos very nicely illustrated, there is a lot of dirtiness involved in
 content modification, especially if you're doing it transparently.

 -Brian


-- 
Feel free to contact me using PGP Encryption:
Key Id: 0x3AA70848
Available from: http://keys.gnupg.net


Re: [squid-users] Need help with insert code into html body

2009-11-29 Thread lan messerschmidt
On Sun, Nov 29, 2009 at 4:45 PM, Ronaldo Zhou ronaldo.z...@gmail.com wrote:
 Hi everyone,

   I need your help with squid to insert some code into the html body
 returned to clients, for analytics reasons.



Squid is designed to make NO changes to the original HTTP response body,
though you can insert some additional headers.
Also, eCAP is a new feature in squid-3.1.


RE: [squid-users] Need help with insert code into html body

2009-11-29 Thread Mike Marchywka










 Date: Sun, 29 Nov 2009 19:26:47 +0800
 From: lan.messerschm...@gmail.com
 To: ronaldo.z...@gmail.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Need help with insert code into html body

 On Sun, Nov 29, 2009 at 4:45 PM, Ronaldo Zhou  wrote:
 Hi everyone,

   I need your help with squid to insert some code into the html body
 returned to clients, for analytics reasons.



 Squid is designed to make NO changes to the original HTTP response body,
 though you can insert some additional headers.

How does compression work? I thought someone said you could compress
results in some squid versions? Also, I've been playing with the ad zapper
and it clearly modifies results. There is no reason the modified file can't
get the target html and do something with it.



 Also, eCAP is a new feature in squid-3.1.


  

Re: [squid-users] Need help with insert code into html body

2009-11-29 Thread lan messerschmidt
On Sun, Nov 29, 2009 at 8:28 PM, Mike Marchywka marchy...@hotmail.com wrote:


 How does compression work? I thought someone said you could compress
 results in some squid versions?

The official squid doesn't do that compression; it's maybe in some other
organization's release, or done with an external module.
You could modify the source and implement that, go on, nobody will blame you.


RE: [squid-users] Need help with insert code into html body

2009-11-29 Thread Mike Marchywka


[ lol, hotmail seems to insist on non-text email 
which the squid list sanely rejected  I guess the
setting is specific to a given machine since I had 
changed it before]

 Subject: Re: [squid-users] Need help with insert code into html body
 
 On Sun, Nov 29, 2009 at 8:28 PM, Mike Marchywka marchy...@hotmail.com wrote:
 

 How does compression work? I thought someone said you could compress
 results in some squid versions?
 
 The official squid doesn't do that compression; it's maybe in some other
 organization's release, or done with an external module.
 You could modify the source and implement that, go on, nobody will blame you.


Well, adzapper may be a better example then. Presumably the adzapper logic 
could get the target url and modify it. Right now it just replaces it with a 
generated image saying zapped. 

Note: while waiting for firefox on windoze to stop doing vm and echo even 1 of 
my last 50 keystrokes, I retyped this message on a debian machine with no 
hesitations or swearing required to get it to respond. IT looks like the
other machine finally came back but it doesn't matter now. In any case, editing 
may be a bit bad as I transition to something that doesn't need a supercomputer 
to type text. Hotmail won't run on my mom's Dell desktop without hanging every 
few messages either. arrggghh.


  

RE: [squid-users] Need help with insert code into html body

2009-11-29 Thread Amos Jeffries
On Sun, 29 Nov 2009 08:14:36 -0500, Mike Marchywka marchy...@hotmail.com
wrote:
 [ lol, hotmail seems to insist on non-text email 
 which the squid list sanely rejected  I guess the
 setting is specific to a given machine since I had 
 changed it before]
 
 Subject: Re: [squid-users] Need help with insert code into html body
 
 On Sun, Nov 29, 2009 at 8:28 PM, Mike Marchywka marchy...@hotmail.com
 wrote:
 

 How does compression work? I thought someone said you could compress
 results in some squid versions?
 
 The official squid doesn't do that compression; it's maybe in some other
 organization's release, or done with an external module.
 You could modify the source and implement that, go on, nobody will
blame
 you.

gzip/deflate is a third-party eCAP plugin.
chunked is a transmission-level encoding of the content, not an
alteration.

 * the key thing with both is that the visitor gets what they asked for in
its original form.

ESI requires website administrator alterations.

 * the key thing here is that the user gets what they asked for as the
website operator designed it to be received.

It's not that we can't do content alteration with Squid; it's that we
developers WON'T add it to the main code without very good reasons and
clear boundaries. It's very popular for people to think they'll just
'fix' something by changing the content received.

There are a lot of legal, security and neutrality issues involved with
altering third-party copyright information without either the content
creator's or the service users' explicit consent.

You need to take a close look at your ethics when this idea pops up.

http://www.pcpd.org.hk/misc/pamela_chan/tsld003.htm
Content alteration violates the 3rd, 4th, 5th, 6th, and 7th UN
international consumer rights, to varying degrees.

With the exception of Britain and Australia, it's illegal to do this in
democratic countries; those two have the same digital-rights policy as
China, prohibiting the transmission, viewing or storage of certain
unspecified content of an 'unsavory' nature.
It's a regular sight for oppressive national dictatorships of various
forms to do content alteration, but even there it's illegal for
non-government personnel to do.

If you have management pressure to do this, here are a few prior examples:
http://www.wired.com/threatlevel/2008/05/isp-content-f-1/
http://www.webpronews.com/topnews/2006/02/07/isps-may-face-liability-for-altering-email
http://www.wired.com/threatlevel/2007/12/canadian-isps-p/
http://www.ip-watch.org/weblog/2009/02/09/isp-liability-limitations-and-exceptions-top-global-copyright-issues-in-2009/
http://www.wired.com/threatlevel/2007/11/comcast-sued-ov/
http://www.macworld.com/article/132075/2008/02/netneutrality1.html
http://www.techcrunch.com/2007/06/23/real-evil-isp-inserted-advertising/

And the things you avoid by not adding your own content to the pages:
http://www.out-law.com/page-753

 
 Well, adzapper may be a better example then. Presumably the adzapper
logic
 could get the target url and modify it. Right now it just replaces it
with
 a generated image saying zapped.

Yes. AdZapper does a simple access denied when retrieving the adverts,
providing its own custom error page which the browser displays instead of
whatever ad content.

Squid does this easily with ACL and the deny_info directive.
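
For example (a sketch with hypothetical names; deny_info given a URL
redirects the denied request to it):

  acl zapped dstdomain .ads.example.com
  deny_info http://your.server.example/zapped.gif zapped
  http_access deny zapped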


Amos



Re: [squid-users] Need help in integrating squid and samba

2009-09-09 Thread Avinash Rao
On Tue, Sep 8, 2009 at 2:49 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 Avinash Rao wrote:

 On Tue, Sep 8, 2009 at 12:19 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 Avinash Rao wrote:

 On Tue, Sep 8, 2009 at 11:38 AM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 Avinash Rao wrote:

 -- Forwarded message --
 From: Avinash Rao avinash@gmail.com
 Date: Tue, Sep 8, 2009 at 11:13 AM
 Subject: Re: Fwd: [squid-users] Need help in integrating squid and samba
 To: Amos Jeffries squ...@treenet.co.nz
 Cc: Henrik Nordstrom hen...@henriknordstrom.net,
 squid-users@squid-cache.org




 On Tue, Sep 1, 2009 at 4:10 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 Avinash Rao wrote:

 On 8/31/09, Amos Jeffries squ...@treenet.co.nz wrote:

 Avinash Rao wrote:

 On Mon, Aug 24, 2009 at 1:00 AM, Henrik Nordstrom

 hen...@henriknordstrom.net
 mailto:hen...@henriknordstrom.net wrote:

  Sun 2009-08-23 at 15:08 +0530, Avinash Rao wrote:
   I couldn't find any document that shows me how to enable wb_info
  for squid.
   Can anybody help me?

  external_acl_type NT_Group %LOGIN
  /usr/local/squid/libexec/wbinfo_group.pl

  acl group1 external NT_Group group1


  then use group1 whenever you want to match users belonging to that
  Windows group.

  Regards
  Henrik


 Hi Henrik,

 I have used the following in my squid.conf

 external_acl_type NT_Group %LOGIN /usr/lib/squid/wbinfo_group.pl
 acl group1 external NT_Group staff

 acl net time M T W T F S S 9:00-18:00
 http_access allow net

 On my linux server, I have created a group called staff and made a
 couple

 of users a member of this group called staff. My intention is to
 provide
 access to users belonging to group staff on all days from morning 9am
 -
 7PM.
 The rest should be denied.

 But this didn't work, when the Samba users login from a winxp
 client,
 it

 doesn't get access to internet at all.
 There is no http_access line making any use of ACL group1

 And _everybody_ (me included on this side of the Internet) is allowed
 to use
 your proxy between 9am and 6pm.


 Amos

 Thanks for the reply, Ya i missed http_access allow group1
 I didn't understand your second statement, are you telling me that I
 should deny access to net?

 You should combine the ACL with others on an http_access line so that
 it's
 limited to who it allows.

 This:
  acl net time M T W T F S S 9:00-18:00
  http_access allow net

 simply says all requests are allowed between time X and Y.
 Without additional controls, i.e. on the IP address making the request, you
 end up with an open proxy.

 Amos

 Dear Amos,

 I am still not able to get this working.  Here's what i want to
 accomplish. I have WinXP - SP2 clients logging onto the samba domain
 and LTSP users. All users use squid proxy. My intention is to control
 the samba users from accessing the internet at certain times.

 If i don't use the external_acl_type NT_Group as mentioned below, the
 squid works properly for all users, even windows and anybody using
 squid proxy.

 external_acl_type NT_Group %LOGIN /usr/local/squid/libexec/wbinfo_group.pl
 acl group1 external NT_Group group1
 I have created a group called staff using the net rpc command and I
 have made all the users using winxp a member of this group staff. So,
 my acl will look like

 external_acl_type NT_Group %LOGIN
 /usr/local/squid/libexec/wbinfo_group.pl
 acl acl_name external NT_Group staff
 http_access allow staff

 According to my understanding, it should allow only those samba users
 which come under the group staff. But thats not happening, squid
 denies access to the internet.

 _when tested_ it should be doing that. Other rules around it have an
 effect
 that you may have overlooked.

 Then again the group name is case-sensitive. The helper is OS access
 permission sensitive, and NTLM auth has difficulties all of its own.


 I'll need to see the whole access config to know whats going on. And
 remind
 me what version of Squid this is.


 Amos

 hi,


 r...@sunbox:/etc/squid# dpkg -l | grep squid
 ii  squid                                 2.6.18-1ubuntu3
                       Internet object cache (WWW proxy cache)
 ii  squid-common                          2.6.18-1ubuntu3
                       Internet object cache (WWW proxy cache) - co

 squid.conf

 visible_hostname sunbox
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY

 use:  cache deny QUERY

 hosts_file /etc/hosts
 http_port 10.10.10.200:3128
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern . 0 20% 4320

 external_acl_type NT_Group %LOGIN /usr/local/squid/libexec/wbinfo_group.pl
 acl staffgroup external NT_Group staff

 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443 563
 acl Safe_ports port 80                # http
 acl Safe_ports port 21                # ftp
 acl Safe_ports port 443 563 

Re: [squid-users] Need help in integrating squid and samba

2009-09-09 Thread Henrik Nordstrom
Wed 2009-09-09 at 12:02 +0530, Avinash Rao wrote:

 http_access allow staffgroup
 http_access allow student staffgroup

The above is wrong.

The first directive allows everyone in staffgroup without restriction,
which means the second cannot be reached. Squid uses the first
http_access line matching the request to determine if the request is
allowed or denied; any http_access rules following that are ignored.
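
For example, a sketch of the intended restriction (note Squid's
one-letter day codes: H is Thursday, A is Saturday):

  acl workhours time MTWHFAS 9:00-18:00
  # first match wins, so tie the group to the hours on one line
  http_access allow staffgroup workhours
  http_access deny staffgroup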

 I am wondering if its really checking the NT group? I also tried using
 the squid_unix_group option, but the result was the same.

It most likely is, assuming you have no proxy_auth REQUIRED acl used
in parts of squid.conf not shown here.

 http_access deny extndeny
 http_access deny purge
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 
 
 #http_access allow friends WORKING
 #http_access deny friends
 http_access deny abc
 http_access deny videos
 
 http_access deny !AuthUsers

Ok.

 http_access allow staffgroup
 http_access allow student staffgroup

See above for why this is wrong. I guess the first of the two should
go..


 http_access allow manager localhost
 http_access deny manager
 http_access allow purge localhost

There is a deny purge rule missing here.

And the whole block should be before your custom rules (i.e. first rules
in http_access).

 #http_access allow special_urls
 #http_access deny extndeny download
 http_access deny badurl
 #http_access deny malware_block_list
 #deny_info http://malware.hiperlinks.com.br/denied.shtml malware_block_list

This deny needs to go before where you allow access to be effective. But
maybe it is.. Not entirely obvious to me who should get denied and who
not.

 http_access allow localhost
 http_access allow lan
 http_access deny all

Ok.

Regards
Henrik





Re: [squid-users] Need help in integrating squid and samba

2009-09-09 Thread Avinash Rao
On Wed, Sep 9, 2009 at 12:56 PM, Henrik Nordstrom
hen...@henriknordstrom.net wrote:
 Wed 2009-09-09 at 12:02 +0530, Avinash Rao wrote:

 http_access allow staffgroup
 http_access allow student staffgroup

 The above is wrong.

 The first directive allows everyone in staffgroup without restriction,
 which means the second cannot be reached. Squid uses the first
 http_access line matching the request to determine if the request is
 allowed or denied; any http_access rules following that are ignored.

 I am wondering if its really checking the NT group? I also tried using
 the squid_unix_group option, but the result was the same.

 It most likely is, assuming you have no proxy_auth REQUIRED acl used
 in parts of squid.conf not shown here.

 http_access deny extndeny
 http_access deny purge
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports


 #http_access allow friends WORKING
 #http_access deny friends
 http_access deny abc
 http_access deny videos

 http_access deny !AuthUsers

 Ok.

 http_access allow staffgroup
 http_access allow student staffgroup

 See above for why this is wrong. I guess the first of the two should
 go..


 http_access allow manager localhost
 http_access deny manager
 http_access allow purge localhost

 There is a deny purge rule missing here.

 And the whole block should be before your custom rules (i.e. first rules
 in http_access).

 #http_access allow special_urls
 #http_access deny extndeny download
 http_access deny badurl
 #http_access deny malware_block_list
 #deny_info http://malware.hiperlinks.com.br/denied.shtml malware_block_list

 This deny needs to go before where you allow access to be effective. But
 maybe it is.. Not entirely obvious to me who should get denied and who
 not.

 http_access allow localhost
 http_access allow lan
 http_access deny all

 Ok.

 Regards
 Henrik




Henrik,

I understood what you said, I removed the conflicting entry,
http_access allow staffgroup and yes my config has:

acl AuthUsers proxy_auth REQUIRED
http_access deny !AuthUsers

But the result was the same. The time restriction is not working.

Regards,
Avinash


Re: [squid-users] Need help in integrating squid and samba

2009-09-08 Thread Amos Jeffries

Avinash Rao wrote:

-- Forwarded message --
From: Avinash Rao avinash@gmail.com
Date: Tue, Sep 8, 2009 at 11:13 AM
Subject: Re: Fwd: [squid-users] Need help in integrating squid and samba
To: Amos Jeffries squ...@treenet.co.nz
Cc: Henrik Nordstrom hen...@henriknordstrom.net, squid-users@squid-cache.org




On Tue, Sep 1, 2009 at 4:10 PM, Amos Jeffries squ...@treenet.co.nz wrote:

Avinash Rao wrote:

On 8/31/09, Amos Jeffries squ...@treenet.co.nz wrote:

Avinash Rao wrote:


On Mon, Aug 24, 2009 at 1:00 AM, Henrik Nordstrom

hen...@henriknordstrom.net
mailto:hen...@henriknordstrom.net wrote:

   Sun 2009-08-23 at 15:08 +0530, Avinash Rao wrote:
I couldn't find any document that shows me how to enable wb_info
  for squid.
Can anybody help me?

  external_acl_type NT_Group %LOGIN
  /usr/local/squid/libexec/wbinfo_group.pl

  acl group1 external NT_Group group1


  then use group1 whenever you want to match users belonging to that
  Windows group.

  Regards
  Henrik


Hi Henrik,

I have used the following in my squid.conf

external_acl_type NT_Group %LOGIN /usr/lib/squid/wbinfo_group.pl
acl group1 external NT_Group staff

acl net time M T W T F S S 9:00-18:00
http_access allow net

On my linux server, I have created a group called staff and made a couple

of users a member of this group called staff. My intention is to provide
access to users belonging to group staff on all days from morning 9am - 7PM.
The rest should be denied.

But this didn't work, when the Samba users login from a winxp client, it

doesn't get access to internet at all.
There is no http_access line making any use of ACL group1

And _everybody_ (me included on this side of the Internet) is allowed to use
your proxy between 9am and 6pm.


Amos


Thanks for the reply, Ya i missed http_access allow group1
I didn't understand your second statement, are you telling me that I
should deny access to net?

You should combine the ACL with others on an http_access line so that it's
limited to who it allows.

This:
 acl net time M T W T F S S 9:00-18:00
 http_access allow net

simply says all requests are allowed between time X and Y.
Without additional controls, i.e. on the IP address making the request, you end up
with an open proxy.

Amos




Dear Amos,

I am still not able to get this working.  Here's what i want to
accomplish. I have WinXP - SP2 clients logging onto the samba domain
and LTSP users. All users use squid proxy. My intention is to control
the samba users from accessing the internet at certain times.

If i don't use the external_acl_type NT_Group as mentioned below, the
squid works properly for all users, even windows and anybody using
squid proxy.

external_acl_type NT_Group %LOGIN /usr/local/squid/libexec/wbinfo_group.pl
acl group1 external NT_Group group1
I have created a group called staff using the net rpc command and I
have made all the users using winxp a member of this group staff. So,
my acl will look like

external_acl_type NT_Group %LOGIN /usr/local/squid/libexec/wbinfo_group.pl
acl acl_name external NT_Group staff
http_access allow staff

According to my understanding, it should allow only those samba users
which come under the group staff. But thats not happening, squid
denies access to the internet.


_when tested_ it should be doing that. Other rules around it have an 
effect that you may have overlooked.


Then again the group name is case-sensitive. The helper is OS access 
permission sensitive, and NTLM auth has difficulties all of its own.



I'll need to see the whole access config to know whats going on. And 
remind me what version of Squid this is.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE18
  Current Beta Squid 3.1.0.13

