Re: [squid-users] problem accessing sharepoint

2012-05-22 Thread Paolo Supino
Hi

 In my case I can't bypass the proxies and thus it's not a solution I
can implement.
Please help me solve this problem in other ways.






TIA
Paolo


On Tue, May 22, 2012 at 6:36 AM, Nishant Sharma codemarau...@gmail.com wrote:
 Hi,

 We also bypass the proxy for access to sharepoint.

 It's easier to do with a PAC or WPAD file, to avoid making changes on each of
 the desktops.

 Regards,
 Nishant

 On 22 May 2012 06:45, Usuário do Sistema maico...@ig.com.br wrote:

 Hi, I have the same problem, and I bypass the proxy for that sharepoint
 URL.


 Any tip on how to figure it out is welcome.


 thanks


 2012/5/21 Paolo Supino paolo.sup...@gmail.com:
  Hi
 
  I was approached by a user that has problems accessing a sharepoint
  share external to our company and I'm lost in finding the cause of the
  failure and a fix for it...
 
  The remote sharepoint site (running sharepoint 14 on IIS 7.5) is
  accessed via a battery of Squid proxies (2.6.STABLE21, RHEL 5.5) that
  authenticate to the company's windows 2003 domain via kerberos and an
  external helper that checks group membership. When trying to access
  the remote sharepoint site via the URL:
  http://www.example.com/sites/share-name it repeatedly prompts the user
  with username/password (the sharepoint site uses NTLM authentication).
  Running TCP dump on the proxy through which the request is being
  forwarded I noticed that the sharepoint site rejects the
  username/password pair and sends back HTTP/1.1 401 Unauthorized.
 
  Authentication isn't rejected completely when using Internet Explorer
  6 and explicitly asking for the default.aspx ASP page by entering the URL:
  http://www.example.com/sites/share-name/default.aspx, but some elements
  in the page aren't loaded causing it to be impossible to work with the
  files in the share.
 
 
  I apologize for the lack of information (again, I'm lost). Anyone can
  try and help me solve the problem (if it is solvable)?
 
 
 
  TIA
  Paolo


Re: [squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-05-22 Thread Nishant Sharma
Hi Amos,

Thanks for your detailed response.

On Tue, May 22, 2012 at 4:56 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
 acl loggedin external hosted_auth
 deny_info https://hostedserver/auth.html loggedin
 http_access deny !loggedin
 http_access allow all

 Please be aware there is no authentication in this setup, despite the login
 on your portal page.
 What you have is session-based *authorization*.
 It is a razor-thin line, but critical to be aware of, since NAT erases and
 plays with the %SRC key which you are using to identify clients. 1) NAT
 hides unwanted visitors on the POP networks. 2) The XFF workaround to undo
 the NAT is header based, with risks of header forgery. So NAT introduces
 multiple edge cases where attacks can leak through and hijack sessions.

I understand the difference between Authentication and Authorization,
but here the prime motive is to enforce user based access rules and
perform AuthN / AuthZ over a secured channel against IMAP.

We segregate the zones as Trusted and Non-Trusted: the Trusted zone is our
HO, where a proxy forwards requests to our publicly hosted Squid with the
XFF header, while the Non-Trusted zones are our spokes and roadwarrior users
who are behind a simple NAT. Trusted-zone users are allowed to access the
proxy with just authorization (session / form based), while Non-Trusted-zone
users must authenticate explicitly (proxy auth). This way we could enforce
the policies based on users instead of IPs.
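
A rough squid.conf sketch of that split, for illustration only (the edge-proxy address, helper path and ACL names are hypothetical placeholders, and an auth_param scheme is assumed to be configured for the proxy_auth part):

  # trusted HO edge proxy; its XFF header is used to recover the real client IP
  acl ho_edge_proxy src 203.0.113.10
  follow_x_forwarded_for allow ho_edge_proxy
  # session-based authorization, as in Amos's example above
  external_acl_type hosted_auth ttl=0 %SRC /etc/squid/auth.pl
  acl loggedin external hosted_auth
  deny_info https://hostedserver/auth.html loggedin
  acl authed proxy_auth REQUIRED
  # HO (trusted) traffic: only the session check, redirecting to the portal when not logged in
  http_access deny ho_edge_proxy !loggedin
  http_access allow ho_edge_proxy
  # spokes / roadwarriors behind NAT: explicit proxy authentication
  http_access allow authed
  http_access deny all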

Again, the problem is the secured authentication against IMAPS. Mail
is hosted on google and we can't use DIGEST that we receive from
browsers. BASIC auth is ruled out again due to security reasons. VPN /
Stunnel is not considered due to user credential / machine management.

  While the HTML file displays a login
 form over HTTPS and sends request to a CGI script which authenticates
 against IMAPS and populates the DB with session information. I
 understand that I can not use cookies for authentication as browser
 will not include cookie set by our authentication page for request to
 other domains.

 Correct.

On some more googling, I found something called Surrogate Cookies here:
https://kb.bluecoat.com/index?page=contentid=KB3407
https://kb.bluecoat.com/index?page=contentid=KB2877

From what I could understand, their primary usage is with a reverse proxy
in front of webservers with a limited set of domains behind them, but they
are also used for surrogate authentication in normal proxy deployments by
forcing the proxy to accept cookies for any domain? Even the commercial
proxies advise against using surrogate credentials wherever possible. The
major disadvantage I can see is that they can't be used with wget, lynx,
elinks, Java applets etc., which expect the usual proxy authentication.

 bit lacking in how to merge the format %SRC %{X-Forwarded-For} into one
 UUID token. There is the space between the two tokens, and the XFF header is
 likely to contain spaces internally which the script as published can't
 handle.
 HINT: If anyone has a fix for that *please* let me know. I know it's
 possible; I stumbled on a perl trick ages back that would do it, then lost
 the script it was in :(

The following snippet should help if you just want to strip the spaces from
the $token string:

my $token = "%SRC %{X-Forwarded-For}";
$token =~ s/ //;   # this removes only the first space
$token =~ s/ //g;  # this removes all the spaces in the string
$token =~ tr/ //d; # equivalent idiom: tr/// deletes every space

If you could send in sample strings - received and final expected
result, I can help with hacking Perl code.

I have also written an auth helper based on the existing POP3 auth
helper. It authenticates against IMAP and IMAPS depending on the
arguments provided e.g.:

## IMAPS against google but return ERR if user tries to authenticate
with @gmail.com
imap_auth imaps://imap.google.com mygooglehostedmail.com

## IMAP auth against my own IMAP server
imap_auth imap://imap.mydomain.com mydomain.com
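
For context, a helper like the one described above would normally be wired in as a Basic authentication helper; a minimal sketch, assuming a hypothetical install path and reusing the argument strings shown above:

  auth_param basic program /usr/local/squid/libexec/imap_auth imaps://imap.google.com mygooglehostedmail.com
  auth_param basic children 5
  auth_param basic realm proxy
  acl imap_users proxy_auth REQUIRED
  http_access allow imap_users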

Where should I submit that as a contribution to Squid?

 Having edge proxies in the POP also enables you to setup a workaround for
 NAT which XFF was designed for
 * The edge proxies add client (pre-NAT) IP address to XFF header, and
 forward to the central proxy.
 * The central proxy only trusts traffic from the edge proxies (eliminating
 WAN attacks).
 * The central proxy trusts *only* the edge proxies in an ACL used by
 follow_x_forwarded_for allow directive. Doing so alters Squid %SRC parameter
 to be the client the POP edge proxy received.
 This setup also allows you to encrypt the TCP links between POP edge proxies
 and central if you want, or to bypass the central proxy for specific
 requests if you need to, and/or to offload some of the access control to
 site-specific controls into the POP edge proxies.
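
A minimal sketch of the central-proxy side of the arrangement Amos describes, assuming the POP edge proxies sit on 192.0.2.0/24 (the subnet is a placeholder, and follow_x_forwarded_for requires Squid built with that feature enabled):

  acl pop_edges src 192.0.2.0/24
  # trust XFF only from the edge proxies, so %SRC becomes the pre-NAT client IP
  follow_x_forwarded_for allow pop_edges
  follow_x_forwarded_for deny all
  # accept traffic only from the edge proxies, eliminating WAN attacks
  http_access allow pop_edges
  http_access deny all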

Thanks for the detailed setup guidance. I have actually already put
the proxy in place as you have suggested and follow_x_forwarded_for is
working great as expected for the HO 

Re: [squid-users] problem accessing sharepoint

2012-05-22 Thread Nishant Sharma
Hi Paolo,

Is there any AV filtering happening, with HAVP as a parent to Squid? You
could configure something like this and see if it works:

pipeline_prefetch on
acl sharepoint dst SHAREPOINT_IP
# or: acl sharepoint dstdomain SHAREPOINT_DOMAIN
always_direct allow sharepoint

Moreover, sharepoint doesn't work very well on non-IE browsers.

regards,
Nishant

On Tue, May 22, 2012 at 11:54 AM, Paolo Supino paolo.sup...@gmail.com wrote:
 Hi

  In my case I can't bypass the proxies and thus it's not a solution I
 can implement.
 Please help me solve this problem in other ways.






 TIA
 Paolo


 On Tue, May 22, 2012 at 6:36 AM, Nishant Sharma codemarau...@gmail.com 
 wrote:
 Hi,

 We also bypass the proxy for access to sharepoint.

 It's easier to do with a PAC or WPAD file, to avoid making changes on each of
 the desktops.

 Regards,
 Nishant

 On 22 May 2012 06:45, Usuário do Sistema maico...@ig.com.br wrote:

 Hi, I have the same problem, and I bypass the proxy for that sharepoint
 URL.


 Any tip on how to figure it out is welcome.


 thanks


 2012/5/21 Paolo Supino paolo.sup...@gmail.com:
  Hi
 
  I was approached by a user that has problems accessing a sharepoint
  share external to our company and I'm lost in finding the cause of the
  failure and a fix for it...
 
  The remote sharepoint site (running sharepoint 14 on IIS 7.5) is
  accessed via a battery of Squid proxies (2.6.STABLE21, RHEL 5.5) that
  authenticate to the company's windows 2003 domain via kerberos and an
  external helper that checks group membership. When trying to access
  the remote sharepoint site via the URL:
  http://www.example.com/sites/share-name it repeatedly prompts the user
  with username/password (the sharepoint site uses NTLM authentication).
  Running TCP dump on the proxy through which the request is being
  forwarded I noticed that the sharepoint site rejects the
  username/password pair and sends back HTTP/1.1 401 Unauthorized.
 
  Authentication isn't rejected completely when using Internet Explorer
  6 and explicitly asking for the default.aspx ASP page by entering the URL:
  http://www.example.com/sites/share-name/default.aspx, but some elements
  in the page aren't loaded causing it to be impossible to work with the
  files in the share.
 
 
  I apologize for the lack of information (again, I'm lost). Anyone can
  try and help me solve the problem (if it is solvable)?
 
 
 
  TIA
  Paolo


Re: [squid-users] FTP option ftp_epsv

2012-05-22 Thread Matus UHLAR - fantomas

I have configured the browser (HTTP and FTP options) to use the Squid proxy.
Some FTP sites open fine through the browser, but ftp://ftp.uar.net/ does not.

If I use ftp_epsv off then it works fine.
I am using squid-3.1.9.

What was the exact problem?
Please tell me the consequences of ftp_epsv off.
Will it affect any other settings?


On 15.05.12 04:27, Nil Nik wrote:

Please reply, I need help on this.


EPSV is the new, IPv6-compatible version of PASV, aka FTP passive mode.

If an intermediate firewall does not understand it, connections to the FTP
server can fail.


In fact, it should be up to the FTP server's admin to disable EPSV on an
EPSV-capable FTP server that sits behind a firewall without EPSV support.


You should only have to care about EPSV when you are behind such a firewall
yourself, but apparently you will also need to handle it for the cases where
your customers use your Squid to access such servers.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
My mind is like a steel trap - rusty and illegal in 37 states. 


[squid-users] How to reload a request via cache in squid if URL is blocked?

2012-05-22 Thread Mehdi Sadeqi
Hi squid users!

I have a simple and pretty basic squid3 configuration on my home
server. I use it in combination with Tor and Privoxy. I have defined
Privoxy as a cache_peer in the Squid configuration, together with an
access list containing URLs, this way:

     acl censored        dstdomain   /home/me/censored.acl
     never_direct        allow       censored
     cache_peer localhost parent 8118 0 no-query no-digest name=privoxy
     cache_peer_access   privoxy allow censored

Gradually I add new URLs to the censored.acl file to be loaded via
proxy and it works. What I need is to make the process dynamic. For
every request that goes to a blocked URL I have two lines of squid
log:


    1337354630.541    716 127.0.0.1 TCP_MISS/403 521 GET 
http://bbc.co.uk/persian - DIRECT/212.58.241.131 -
    1337354630.614 24 127.0.0.1 TCP_HIT/000 0 GET http://10.10.34.34/? - 
 DIRECT/10.10.34.34 -


The second line is always the same. Is there any way that I can reload
a request in Squid according to the response? Or when the request is
followed by or redirected to another URL? Because on censored URLs
TCP_MISS/403 always happens.


Re: [squid-users] problem with upload

2012-05-22 Thread Mustafa Raji
Sorry for forgetting to mention the Squid version: it's 3.1.11.
The POST entries in access.log (a sample, grepping only for POST requests) are:
1337667519.252822 192.168.12.100 TCP_MISS/200 2122 POST 
http://ocsp.verisign.com/ - DIRECT/199.7.57.72 application/ocsp-response
1337667534.962505 192.168.12.100 TCP_MISS/200 1078 POST 
http://ocsp.usertrust.com/ - DIRECT/178.255.83.1 application/ocsp-response
1337667536.532   1440 192.168.12.100 TCP_MISS/200 2331 POST 
http://ocsp.entrust.net/ - DIRECT/216.191.247.139 application/ocsp-response
1337670608.843996 192.168.12.100 TCP_MISS/200 6683 POST 
http://us.mc1256.mail.yahoo.com/mc/compose? - DIRECT/66.196.66.156 text/html
1337670695.523675 192.168.12.100 TCP_MISS/200 982 POST 
http://ocsp.digicert.com/ - DIRECT/69.36.162.242 application/ocsp-response
1337670696.642597 192.168.12.100 TCP_MISS/200 982 POST 
http://ocsp.digicert.com/ - DIRECT/69.36.162.242 application/ocsp-response
1337670696.915556 192.168.12.100 TCP_MISS/200 982 POST 
http://ocsp.digicert.com/ - DIRECT/69.36.162.242 application/ocsp-response
1337670809.875460 192.168.12.100 TCP_MISS/200 995 POST 
http://arabia.msn.com/GeneralMethod.aspx/GetWeather - DIRECT/41.178.51.12 
application/json
1337670817.995782 192.168.12.100 TCP_MISS/200 2164 POST 
http://ocsp.verisign.com/ - DIRECT/199.7.52.72 application/ocsp-response
1337670818.160955 192.168.12.100 TCP_MISS/200 2164 POST 
http://ocsp.verisign.com/ - DIRECT/199.7.52.72 application/ocsp-response
1337670825.073655 192.168.12.100 TCP_MISS/200 982 POST 
http://ocsp.digicert.com/ - DIRECT/69.36.162.242 application/ocsp-response
1337670828.705   3573 192.168.12.100 TCP_MISS/200 982 POST 
http://ocsp.digicert.com/ - DIRECT/69.36.162.242 application/ocsp-response
1337670830.291   1028 192.168.12.100 TCP_MISS/200 2295 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670830.291830 192.168.12.100 TCP_MISS/200 2295 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670830.901493 192.168.12.100 TCP_MISS/200 2295 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670830.925484 192.168.12.100 TCP_MISS/200 2295 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670831.044479 192.168.12.100 TCP_MISS/200 2434 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670831.538485 192.168.12.100 TCP_MISS/200 2434 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670831.568484 192.168.12.100 TCP_MISS/200 2434 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670831.649483 192.168.12.100 TCP_MISS/200 2434 POST 
http://evsecure-ocsp.verisign.com/ - DIRECT/199.7.52.72 
application/ocsp-response
1337670884.703525 192.168.12.100 TCP_MISS/200 788 POST 
http://www.4shared.com/javascriptRedirect.jsp - DIRECT/74.117.178.89 text/html
1337670930.970636 192.168.12.100 TCP_MISS/200 581 POST 
http://www.4shared.com/rest/sharedFileUpload/create? - DIRECT/74.117.178.89 
application/json
1337671057.835 124619 192.168.12.100 TCP_MISS/000 0 POST 
http://dc588.4shared.com/main/upload5.jsp? - DIRECT/204.155.149.57 -
1337671097.683 468715 192.168.12.100 TCP_MISS/502 1470 POST 
http://ne1.attach.mail.yahoo.com/us.f1256.mail.yahoo.com/ya/upload? - 
DIRECT/98.138.79.63 text/html
1337671201.785745 192.168.12.100 TCP_MISS/200 367 POST 
http://www.4shared.com/rest/sharedFileUpload/error - DIRECT/74.117.178.89 
application/json
1337671368.951 166333 192.168.12.100 TCP_MISS/000 0 POST 
http://dc588.4shared.com/main/upload5.jsp? - DIRECT/204.155.149.57 -
1337671420.455656 192.168.12.100 TCP_MISS/200 1032 POST 
http://stats.avg.com/services/toolbar_updater.aspx - DIRECT/23.45.247.117 
text/xml
1337671435.492367 192.168.12.100 TCP_MISS/302 584 POST 
http://stats.avg.com/Services/ssf.asmx/GetFile - DIRECT/23.45.247.117 -

Please, if the rule
  acl my_network src 192.168.12.0/24
  http_access allow my_network
is removed, how can I allow this IP range to reach the Squid cache server?
I read that /000 means the connection was aborted; the connection is aborted
when the upload breaks down.
I will add the http_access deny all, as in the sketch below.
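
For reference, a minimal sketch of the usual ordering for those rules, using only the directives already mentioned above:

  acl my_network src 192.168.12.0/24
  http_access allow my_network
  # final catch-all, evaluated only when nothing above matched
  http_access deny all
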
thanks with my best regards 

--- On Mon, 5/21/12, Amos Jeffries squ...@treenet.co.nz wrote:

 From: Amos Jeffries squ...@treenet.co.nz
 Subject: Re: [squid-users] problem with upload
 To: squid-users@squid-cache.org
 Date: Monday, May 21, 2012, 11:57 PM
 On 22.05.2012 06:18, Mustafa Raji
 wrote:
  hi
  i have a squid cache server configured in intercept mode. i have a
  problem when i upload to websites: sometimes i can upload normally and
  other times, when i upload a file to the internet, the uploading process
  does not complete and the upload drops to 0 kB.
  please can any one help me. is there any way that squid
 effects 

Re: [squid-users] problem accessing sharepoint

2012-05-22 Thread Paolo Supino
Hi Nishant

  Yes, we do have upstream proxies: a Finjan security scanner. I tried to
bypass them with always_direct, but it didn't work...



TIA
Paolo




On Tue, May 22, 2012 at 8:41 AM, Nishant Sharma codemarau...@gmail.com wrote:
 Hi Paolo,

 Is there any AV filtering happening, with HAVP as a parent to Squid? You
 could configure something like this and see if it works:

 pipeline_prefetch on
 acl sharepoint dst SHAREPOINT_IP
 # or: acl sharepoint dstdomain SHAREPOINT_DOMAIN
 always_direct allow sharepoint

 Moreover, sharepoint doesn't work very well on non-IE browsers.

 regards,
 Nishant

 On Tue, May 22, 2012 at 11:54 AM, Paolo Supino paolo.sup...@gmail.com wrote:
 Hi

  In my case I can't bypass the proxies and thus it's not a solution I
 can implement.
 Please help me solve this problem in other ways.






 TIA
 Paolo


 On Tue, May 22, 2012 at 6:36 AM, Nishant Sharma codemarau...@gmail.com 
 wrote:
 Hi,

 We also bypass the proxy for access to sharepoint.

 It's easier to do with a PAC or WPAD file, to avoid making changes on each of
 the desktops.

 Regards,
 Nishant

 On 22 May 2012 06:45, Usuário do Sistema maico...@ig.com.br wrote:

 Hi, I have the same problem, and I bypass the proxy for that sharepoint
 URL.


 Any tip on how to figure it out is welcome.


 thanks


 2012/5/21 Paolo Supino paolo.sup...@gmail.com:
  Hi
 
  I was approached by a user that has problems accessing a sharepoint
  share external to our company and I'm lost in finding the cause of the
  failure and a fix for it...
 
  The remote sharepoint site (running sharepoint 14 on IIS 7.5) is
  accessed via a battery of Squid proxies (2.6.STABLE21, RHEL 5.5) that
  authenticate to the company's windows 2003 domain via kerberos and an
  external helper that checks group membership. When trying to access
  the remote sharepoint site via the URL:
  http://www.example.com/sites/share-name it repeatedly prompts the user
  with username/password (the sharepoint site uses NTLM authentication).
  Running TCP dump on the proxy through which the request is being
  forwarded I noticed that the sharepoint site rejects the
  username/password pair and sends back HTTP/1.1 401 Unauthorized.
 
  Authentication isn't rejected completely when using Internet Explorer
  6 and explicitly asking for the default.aspx ASP page by entering the URL:
  http://www.example.com/sites/share-name/default.aspx, but some elements
  in the page aren't loaded causing it to be impossible to work with the
  files in the share.
 
 
  I apologize for the lack of information (again, I'm lost). Anyone can
  try and help me solve the problem (if it is solvable)?
 
 
 
  TIA
  Paolo


Re: [squid-users] ICAP respmod problem

2012-05-22 Thread Nobuhiro Nikushi
Hi, Amos.

 Please notice: Proxy-Connection: Keep-Alive was requested by curl.

As you pointed out, curl in my example should not have been used with
keepalive.

Anyway, I hope the patch will be applied to the 3.1.x branch as well as
3.2 (where it is fixed). This bug causes a regression. Thanks.

Regards.

On Tue, May 22, 2012 at 10:18 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 22.05.2012 02:02, Nobuhiro Nikushi wrote:

 Hi, folks.

 I am using Squid 3.1.19 as a client-side cache server, compiled with the
 --enable-icap-client option.

 I have a problem: Squid cannot finish the HTTP request from the browser
 under specific conditions, as follows.

  - Squid is configured to forward the response body to an ICAP respmod server,
    AND
  - the web server answers with a content length of exactly 1 byte.


 The following is the curl's output.

  $ curl -o /dev/null -v -x 192.168.1.1:8080
 http://radiant-water-7466.herokuapp.com/1
  * About to connect() to proxy 192.168.1.1 port 8080
  *   Trying 192.168.1.1... connected
  * Connected to 192.168.1.1 (192.168.1.1) port 8080
   GET http://radiant-water-7466.herokuapp.com/1 HTTP/1.1
   User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5
 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
   Host: radiant-water-7466.herokuapp.com
   Proxy-Connection: Keep-Alive
  
   HTTP/1.0 200 OK
   Date: Mon, 21 May 2012 13:24:54 GMT
   Content-Type: text/html;charset=utf-8
   Server: thin 1.3.1 codename Triple Espresso
   Content-Length: 1
  * HTTP/1.0 connection set to keep alive!
   Connection: keep-alive
    % Total    % Received % Xferd  Average Speed   Time    Time
 Time  Current
                                   Dload  Upload   Total   Spent
 Left  Speed
    0     1    0     0    0     0      0      0 --:--:--  0:00:20
 --:--:--     0
                                                                    ~
  http://radiant-water-7466.herokuapp.com/1 returns a single character with
  Content-Length: 1.
  In this case the connection should be closed by Squid, but Squid was
  keeping the connection open.


 Please notice: Proxy-Connection: Keep-Alive was requested by curl.

 3.1 series does have a hanging problem with ICAP on 1-byte traffic but it is
 not being demonstrated by the above.



  There is no problem if the content-length is more than 1 or if ICAP respmod is disabled.


 This is http://bugs.squid-cache.org/show_bug.cgi?id=3466.

 You need the latest daily snapshot of 3.2 or 3.HEAD Squid to get a fixed
 version. There is a patch in the bug report you are free to use/test, we are
 just not confident enough about the impact on 3.1 for it to go into stable.

 Amos


Re: [squid-users] R: [squid-users] Fix time

2012-05-22 Thread Matus UHLAR - fantomas

On 18.05.12 09:32, Netmail wrote:

This is the message from Squid:
http://imageshack.us/photo/my-images/41/catturapnp.png/
My timezone is GMT+1, Rome, Italy.


Apparently +2, because you have summer time (another +1). That would
explain it. Squid runs in UTC, which has no timezones and no summer
time.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Quantum mechanics: The dreams stuff is made of. 


Re: [squid-users] How to reload a request via cache in squid if URL is blocked?

2012-05-22 Thread Eliezer Croitoru

On 22/05/2012 13:09, Mehdi Sadeqi wrote:

Hi squid users!

I have a simple and pretty basic squid3 configurations in my home
server. I use it in combination with Tor and Privoxy. I have defined
privoxy as cache_peer in squid configuration and an access list
containing URLs this way:


 acl censored       dstdomain   /home/me/censored.acl
 never_direct       allow       censored
 cache_peer localhost parent 8118 0 no-query no-digest name=privoxy
 cache_peer_access  privoxy allow censored


Gradually I add new URLs to the censored.acl file to be loaded via
proxy and it works. What I need is to make the process dynamic. For
every request that goes to a blocked URL I have two lines of squid
log:



 1337354630.541    716 127.0.0.1 TCP_MISS/403 521 GET http://bbc.co.uk/persian - DIRECT/212.58.241.131 -
 1337354630.614     24 127.0.0.1 TCP_HIT/000 0 GET http://10.10.34.34/? - DIRECT/10.10.34.34 -



The second line is always the same. Is there any way that I can reload
a request in Squid according to the response? Or when the request is
followed by or redirected to another URL? Because on censored URLs
TCP_MISS/403 always happens.
You can redirect in the HTTP server from the blocked URL to another URL,
or to the source/referer URL, but beware of redirection loops.


Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] Caching a single IP address in Squid - possible?

2012-05-22 Thread iskeels
Hi 

I've set up Squid and now want to configure it so that all http traffic
passes through without caching except for a specific IP address range I
specify.  To be clear, I want Squid to ignore all http traffic and cache
nothing but content I point it at.  Any help on how to achieve this would be
very much appreciated. 

Thanks 

Ian.

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Caching-a-single-IP-address-in-Squid-possible-tp4650081.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Basic questions about Squid capabilities

2012-05-22 Thread Eliezer Croitoru

On 20/05/2012 19:47, Jason Voorhees wrote:

Hi people:

I've been a Squid user for a long time, but I believe my skills aren't
high enough to implement some of the features I'm asking about in this
e-mail.

In a university there are 6000-8000 users (spread across a big campus
through different VLANs, offices and even metro-ethernet connected
branches) browsing the Internet through two lines of 80 and 70 Mbps.
Currently there's a Fortinet appliance doing the labour of web
filtering, with some interesting features I'd like to implement with
Squid too. These are the pros and cons of the Fortinet:

cons

- It doesn't have a cache (at least not an effective one)
- When the Fortinet implements too many bandwidth rules (something like
Squid delay pools) it begins to work slowly and browsing becomes
slow too.

Squid can implement both of them, but it depends on the hardware that is
hosting Squid (a delay-pool sketch follows below).

A basic 4-core box with 8 GB of RAM can basically do the job for you.
The number of users is not much of a sizing measurement; requests per
second and bandwidth throughput together are.
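
As an illustration of the delay-pool style bandwidth rules mentioned above, a minimal sketch (the subnet and byte limits are arbitrary placeholders):

  # one class-1 (aggregate) pool limiting the whole campus to roughly 2 MB/s
  acl campus src 10.0.0.0/8
  delay_pools 1
  delay_class 1 1
  delay_parameters 1 2000000/2000000
  delay_access 1 allow campus
  delay_access 1 deny all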




pros

- It has a feature to transparently block HTTPS websites. The Fortinet
admin told me that only for blocked webpages do users get a warning of an
incorrect certificate (a Fortinet digital certificate), but for
allowed websites users don't get any warning of failing digital
certificates (I don't know if this is true or possible).
- Its web filtering is good; it has an up-to-date database of
categorized websites that makes blocking easy.

What I plan to do is (or what I'd like to do):

- Put Squid in front of the Fortinet so the latter can use Squid's cache. I
read this is possible using WCCP and some other things.
- Squid should work as a replacement for the Fortinet if it someday
fails, i.e. Squid is the backup solution.

It depends on the outgoing IP address and on the interception level.
In basic interception mode you can use the Fortinet as a cache_peer (see the sketch below).
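
A minimal sketch of that cache_peer idea, assuming the Fortinet accepts proxied traffic on port 8080 (hostname and port are placeholders):

  cache_peer fortinet.example.local parent 8080 0 no-query default
  # send requests through the peer instead of going direct
  never_direct allow all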




So to achieve this I think I need:

a) Do good filtering: I was thinking about configuring Squid +
SquidGuard with a free database, but I have a simple and basic
question here: when I use a redirector like SquidGuard, will all Squid ACLs
definitely stop working? I mean, can I use a redirector and still
use my traditional ACLs (acl, http_access, http_reply_access)? The last
time I used a redirector with Squid it appeared that the ACLs
weren't even read by Squid, so I have this doubt.


A url_rewrite helper is what you will use, and all the ACLs will keep working
the same way. You can also bypass the url_rewrite helper with ACLs, so to speak (see the sketch below).
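
A minimal sketch of that combination, assuming SquidGuard at a typical install path (the paths and the admin subnet are placeholders):

  url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
  url_rewrite_children 10
  # http_access ACLs keep working as before; additionally, skip the rewriter for admins
  acl admins src 10.0.1.0/24
  url_rewrite_access deny admins
  url_rewrite_access allow all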



b) Integrate the Fortinet with WCCP: I quickly looked at a few tutorials on how
to do that, but... have you achieved this without problems?

What exactly do you want to achieve by using WCCP? What benefit do you expect from it?




c) Do transparent https proxy with squid : I tried to use https_port +
ssl-bump feature of Squid 3.1 and iptables (REDIRECT 443 port to 3128)
without 100% success. I generated my own certificate and that one is
the same users get when trying to view some websites (i.e.
facebook.com) what is OK but it happened that some websites didn't
work as expected: some website loaded OK, some loaded without CSS
stylesheets nor images, and some others never loaded (i got the
redirect loop error in the browser). I wasn't able to build squid
3.2 but I don't know if is necessary to use this version to get this
feature of transparent https proxy working.
To use ssl-bump you use a port other than 3128, dedicated specifically to
ssl-bump.
There was a bug somewhere that creates a loop like that, and I think the
cause is redirecting 443 to 3128 instead of to the ssl-bump port (see the
sketch below).

Try it again and you will see miracles :]
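
A rough sketch of the separate-port layout being suggested; the certificate paths are placeholders, the flag names follow the current squid.conf documentation (older 3.1 releases may spell them differently, which could itself be part of the problem), and iptables would REDIRECT port 443 to 3130 instead of 3128:

  # explicit proxy / plain HTTP interception stays on 3128
  http_port 3128
  # a dedicated port used only for intercepted-and-bumped HTTPS
  https_port 3130 intercept ssl-bump cert=/etc/squid/proxyCA.pem key=/etc/squid/proxyCA.pem
  ssl_bump allow all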




d) Cache performance: Are there any special Squid settings that help me
improve, or get the maximum performance from, my cache? Is Squid the
best open source solution to implement a powerful cache for my users?

I hope someone with an extra free time can help with suggestions,
ideas or point me to some articles on Internet about these features.
There are some open source cache options, but Squid is the most advanced
one that I have seen and used.
It's very simple to configure compared to many other solutions that exist,
even compared to paid ones.
For dynamic content you can add an instance of Squid 2.7.STABLE9 patched
to also cache YouTube and some other sites that won't be cached due to
their dynamic link behaviour (see the sketch below).
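
For reference, the 2.7 feature referred to here is the store-URL rewriter; a minimal sketch of how it is wired (the helper script itself is not shown, and its path and the regex are placeholders):

  # Squid 2.7 only: normalise selected dynamic URLs so variants share one cache object
  storeurl_rewrite_program /etc/squid/store_url_rewrite.pl
  storeurl_rewrite_children 5
  acl store_rewrite_list urlpath_regex -i \/videoplayback\?
  storeurl_access allow store_rewrite_list
  storeurl_access deny all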


If you need some more help, don't be afraid to ask.

good luck,
Eliezer




Thanks



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Caching a single IP address in Squid - possible?

2012-05-22 Thread Eliezer Croitoru

On 22/05/2012 18:29, iskeels wrote:

Hi

I've set up Squid and now want to configure it so that all http traffic
passes through without caching except for a specific IP address range I
specify.  To be clear, I want Squid to ignore all http traffic and cache
nothing but content I point it at.  Any help on how to achieve this would be
very much appreciated.

Thanks

Ian.

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Caching-a-single-IP-address-in-Squid-possible-tp4650081.html
Sent from the Squid - Users mailing list archive at Nabble.com.


You can use Squid ACLs for that.
Have a look at the following directives in the link below:
cache
always_direct
never_direct

link:
http://www.squid-cache.org/Doc/config/

You can specify an ACL that matches a client/src IP and use it with the cache directive, as in the sketch below.
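
A minimal sketch of that approach, assuming 192.0.2.0/24 stands in for the address range whose content should be cached (swap dst for src if you want to select by client address instead):

  acl cache_these dst 192.0.2.0/24
  cache allow cache_these
  cache deny all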

Eliezer



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] Integrated Windows Authentication through Squid proxy working

2012-05-22 Thread Javier Conti
Hi list,

Since I've been fighting for some time to make IWA work through my
Squid proxy (using Windows 7 or 2008 as clients), I just wanted to let people
know that HTTP/1.1 and persistent connections are absolutely necessary;
since I installed Squid 3.2, IWA has worked without problems.

Thanks to everybody for ideas and support, Javier


RE: [squid-users] Transparent interception MTU issues

2012-05-22 Thread Daniel Niasoff
Hi Amos,

Seems like I was mistaken. It looked and felt like MTU issues, but the problem
disappeared when I compiled Squid 3.2 from the latest sources.

I am wondering if it's related to this bug 

http://bugs.squid-cache.org/show_bug.cgi?id=3528

Thanks

Daniel

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: 16 May 2012 03:36
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Transparent interception MTU issues

On 16.05.2012 09:53, Daniel Niasoff wrote:
 Hi,

 I am accessing squid through a PPTP tunnel and have a lower MTU as a 
 result.

 I am able to use squid ok as an explicit proxy however when trying 
 transparent interception many pages timeout and don't open.

 I guess this is because of MTU issues.

Likely. But please check your guesses before looking for a fix for them.

   ping -s 1499 ...

PMTU response or lost packet?


 I have tried http_port 3129 intercept disable-pmtu-discovery=always
 but to no avail.

 I am using 3.2.0.17.

 Any ideas?

If it actually is MTU issues, fix them.

  * Enable ICMP control messages to cross the network.
  * Set the MTU and/or MSS on the tunnel entrance to an appropriately low value.

Amos



[squid-users] Error to test connectivity to internal MS Exchange server

2012-05-22 Thread Ruiyuan Jiang
Hi, all

I am trying to set up access from the Internet to an MS Exchange webmail /
RPC-over-HTTP server through Squid (squid 3.1.19, SPARC, Solaris 10). Here is
my pilot squid configuration (squid.conf):

https_port 156.146.2.196:443 accel 
cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
defaultsite=webmail.juicycouture.com

cache_peer 10.150.2.15 parent 443 0 no-query originserver login=PASS ssl 
sslcert=/opt/squid-3.1.19/ssl.crt/webmail_katespade_com.crt 
sslkey=/opt/squid-3.1.19/ssl.crt/webmail_katespade_com.key 
sslcafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt name=exchangeServer

cache_peer_access exchangeServer allow all

http_access allow all

miss_access allow all

From the access log of squid:

1337723055.845  7 207.46.14.63 TCP_MISS/503 3905 RPC_IN_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll - 
FIRST_UP_PARENT/exchangeServer text/html
1337723055.934  5 207.46.14.63 TCP_MISS/503 3932 RPC_IN_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll - 
FIRST_UP_PARENT/exchangeServer text/html


From the cache.log of the squid:

2012/05/22 17:33:28| Starting Squid Cache version 3.1.19 for 
sparc-sun-solaris2.10...
2012/05/22 17:33:28| Process ID 7071
2012/05/22 17:33:28| With 256 file descriptors available
2012/05/22 17:33:28| Initializing IP Cache...
2012/05/22 17:33:28| DNS Socket created at [::], FD 8
2012/05/22 17:33:28| DNS Socket created at 0.0.0.0, FD 9
2012/05/22 17:33:28| Adding domain fifthandpacific.com from /etc/resolv.conf
2012/05/22 17:33:28| Adding nameserver 12.127.17.71 from /etc/resolv.conf
2012/05/22 17:33:28| Adding nameserver 12.127.16.67 from /etc/resolv.conf
2012/05/22 17:33:28| Adding nameserver 156.146.2.190 from /etc/resolv.conf
2012/05/22 17:33:28| Unlinkd pipe opened on FD 14
2012/05/22 17:33:28| Store logging disabled
2012/05/22 17:33:28| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2012/05/22 17:33:28| Target number of buckets: 1008
2012/05/22 17:33:28| Using 8192 Store buckets
2012/05/22 17:33:28| Max Mem  size: 262144 KB
2012/05/22 17:33:28| Max Swap size: 0 KB
2012/05/22 17:33:28| Using Least Load store dir selection
2012/05/22 17:33:28| Current Directory is /opt/squid-3.1.19/var/logs
2012/05/22 17:33:28| Loaded Icons.
2012/05/22 17:33:28| Accepting HTTPS connections at 156.146.2.196:443, FD 15.
2012/05/22 17:33:28| HTCP Disabled.
2012/05/22 17:33:28| Configuring Parent 10.150.2.15/443/0
2012/05/22 17:33:28| Squid plugin modules loaded: 0
2012/05/22 17:33:28| Ready to serve requests.
2012/05/22 17:33:29| storeLateRelease: released 0 objects
-BEGIN SSL SESSION PARAMETERS-
MIGNAgEBAgIDAQQCAC8EIAj2TdmdLmNKL8/+V0D37suIYsli5OZLvCZu6u1+voNA
BDAy5uGQ23i/G+ozoVu/RDjm8yMq3zAJAWiXKz+U537Fd5uMDJeCmo30/cy9WPeF
6fmhBgIET7wIr6IEAgIBLKQCBACmGgQYd2VibWFpbC5qdWljeWNvdXR1cmUuY29t
-END SSL SESSION PARAMETERS-
-BEGIN SSL SESSION PARAMETERS-
MIGNAgEBAgIDAQQCAC8EILcgJcTbarlfw3jpifpmpBZQpBYheYouh2NZp9eoPJUy
BDBs6l+2LMOMI4D/RPQG3mOYbZ7OBcpanTJFaa8zCBV4s6AxtTpIFL2LnxRoJ0uB
I/WhBgIET7wIr6IEAgIBLKQCBACmGgQYd2VibWFpbC5qdWljeWNvdXR1cmUuY29t
-END SSL SESSION PARAMETERS-
2012/05/22 17:44:15| fwdNegotiateSSL: Error negotiating SSL connection on FD 
13: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed (1/-1/0)
2012/05/22 17:44:15| TCP connection to 10.150.2.15/443 failed
2012/05/22 17:44:15| fwdNegotiateSSL: Error negotiating SSL connection on FD 
13: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed (1/-1/0)

From the packet capture, the internal Exchange server resets the connection
from the Squid proxy server after the initial HTTPS handshake between Squid
and Exchange, with either Alert (Level: Fatal, Description: Unknown CA) when
I used the official certificates above, or Alert (Level: Fatal, Description:
Certificate Unknown) when I used an internal CA-signed certificate. Can
anyone tell me how to correctly configure the cache_peer statement to make
it work?
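
For what it's worth, the verification failures on both sides relate to the TLS options of the cache_peer line; a hedged sketch of the two knobs usually involved (the CA file path is a placeholder, and DONT_VERIFY_PEER is only a diagnostic step, not a recommendation):

  # 1) point Squid at the CA that actually signed the Exchange server's certificate
  cache_peer 10.150.2.15 parent 443 0 no-query originserver login=PASS ssl sslcafile=/path/to/internal-exchange-ca.crt name=exchangeServer
  # 2) or, purely to isolate the problem, skip peer verification
  # cache_peer 10.150.2.15 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=exchangeServer

The "Unknown CA" alert seen from the Exchange side suggests it likewise does not trust the client certificate Squid presents via sslcert=, so that trust relationship may need attention as well.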


Thanks in advance.

Ryan Jiang







Re: [squid-users] refresh_pattern dynamic content doubts?

2012-05-22 Thread Beto Moreno
On Sun, May 20, 2012 at 12:57 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 20/05/2012 4:52 p.m., Beto Moreno wrote:

 Hi.

 I have read in the docs that Squid's default setup uses the old way
 to handle dynamic content:

 case A
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY

 And for the new way for this is using the next settings:
 case B
 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 refresh_pattern .            0 20% 4320

 Some sites I had seen they use things like:
 case C
 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 refresh_pattern -i \.index.(html|htm)$ 1440 90% 40320
 refresh_pattern -i \.(html|htm|css|js)$ 1440 90% 40320
 refresh_pattern .            0 20% 4320

 In your experience, is the old way no longer the right way to do this?


 There is no right/wrong here.

 HTTP/1.0 specification is clear that dynamic content created by CGI scripts
 is *very likely* unsafe to cache *unless* the script emits Cache-Control
 headers.

 The old way was to simply not cache anything which came from a dynamic
 script generator.

 The refresh_pattern rules are only used for the objects which have no
 cache-control (ie the unsafe requests) and -i (/cgi-bin/|\?) 0 0% 0 is a
 heuristic rule crafted specifically to match the dynamic content criteria
 and prevent that unsafe content being cached.

 The new way permits caching whenever the dynamic responses created by modern
 script languages send cache-controls. All the modern dynamic websites are
 cacheable (their script engines emit cache-control) and using ?, so the old
 way would prevent caching. Leaving ISP with 20% cache HIT ratios. Moving to
 the new rule gains a few % in HIT ratio without much risk.


 What is the difference between case B and case C?
 Which is better?


 There is no better. Everything in refresh_pattern is relative to the
 specific traffic pattern going through a specific proxy.

 You can tune it perfectly for todays traffic, and a new website becomes
 popular tomorrow that uses entirely different patterns. Or the popular
 website you are trying to cache changes their headers.



 For dynamic content, are these the only settings we have? (I don't care
 about youtube or streaming.)


 The thing to understand is that to squid there is no distinction between
 dynamic and static content. It is all just content. *individual* objects
 have headers (or not) which indicate its *individual* cacheability.

 refresh_pattern directive is a blunt-object regex pattern applied
 universally to all requests to estimate cacheability time for objects which
 have no specific mention of lifetime sent by the server.
 cache directive is a sledge hammer to prevent caching or particular ACL
 matching requests.




 Does a formula exist for setting up the min/max/percent values?


 No. They are the *input* values to a formula for calculating expiry time.
 They are how long *you want* to store any object which matches the regex.
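
A rough worked example of how those inputs feed the freshness heuristic, for a response carrying no explicit expiry information:

  refresh_pattern . 0 20% 4320
  # Object fetched now, Last-Modified 10 hours ago:
  #   heuristic lifetime = 20% of 10 hours = 2 hours,
  #   clamped between min (0 minutes) and max (4320 minutes),
  # so the object is considered fresh for roughly the next 2 hours.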


 Amos

Thanks for your great explanation. I'm working on these settings to see
which of them gets the most out of my cache. Thanks again!!!


[squid-users] dynamic content refresh_pattern.

2012-05-22 Thread Beto Moreno
 I have been working on the settings:

 refresh_pattern.

 The docs say this is better for the new websites that use dynamic
content, and a friend here on the list explained the difference to me.

 My test was simple:

 use 2 browsers: firefox/iexplore.
 Run the test twice for each site.

 first run
 firefox site1, site2,site3,site4
 iexplore site1, site2,site3,site4

 run ccleaner, repeat the test.

 run srg and free-sa to get my squid-cache performance reports.

 There were 3 settings I tried, running the same test for each.

 NOTE: every time I start with a new setting, I delete my cache, clean my logs
and start from 0.

 setting 1 default settings
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY

 setting 2  new way:
 disable the old way:

 #acl QUERY urlpath_regex cgi-bin \?
 #cache deny QUERY
 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 refresh_pattern .0 20% 4320

  setting 3:

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern .0 20% 4320

 Then, after I finished my tests, I started reviewing and comparing my logs.
The sites I used were:

yahoo.com
osnews.com
frontera.info (local newspaper)
noticiasmvs.com
centos.org

 I didn't interact with the sites; I just went to the first page, let it
finish loading, and continued with the next one.

Once I checked my reports I didn't see too much difference; I found just
one log entry that the old way didn't cache. Check:

settings 2/3 have this:

1337667655.898  0 192.168.50.100 TCP_MEM_HIT/200 21280 GET
http://www.frontera.info/WebResource.axd? - NONE/-
application/x-javascript

setting 1 TCP_MISS.

An example from part of my logs:

1337667655.596 43 192.168.50.100 TCP_MISS/302 603 GET
http://frontera.info/ - DIRECT/216.240.181.163 text/html
1337667655.748 54 192.168.50.100 TCP_MISS/200 1454 GET
http://www.frontera.info/HojasEstilos/Horoscopos.css -
DIRECT/216.240.181.163 text/css
1337667655.749 52 192.168.50.100 TCP_MISS/200 1740 GET
http://www.frontera.info/Includes/Controles/LosEconomicos.css -
DIRECT/216.240.181.163 text/css
1337667655.749 49 192.168.50.100 TCP_MISS/200 1557 GET
http://www.frontera.info/Includes/Controles/ReporteroCiudadano.css -
DIRECT/216.240.181.163 text/css
1337667655.754 54 192.168.50.100 TCP_MISS/200 1697 GET
http://www.frontera.info/Includes/Controles/Elementos.css -
DIRECT/216.240.181.163 text/css
1337667655.780 24 192.168.50.100 TCP_MISS/200 1406 GET
http://www.frontera.info/Includes/Controles/Finanzas.css -
DIRECT/216.240.181.163 text/css
1337667655.817124 192.168.50.100 TCP_MISS/200 21639 GET
http://www.frontera.info/HojasEstilos/Estilos2009.css -
DIRECT/216.240.181.163 text/css
1337667655.898  0 192.168.50.100 TCP_MEM_HIT/200 21280 GET
http://www.frontera.info/WebResource.axd? - NONE/-
application/x-javascript
1337667655.903 20 192.168.50.100 TCP_MISS/200 1356 GET
http://www.frontera.info/Interactivos/lib/jquery.jcarousel.css -
DIRECT/216.240.181.163 text/css
1337667655.907308 192.168.50.100 TCP_MISS/200 116552 GET
http://www.frontera.info/Home.aspx - DIRECT/216.240.181.163 text/html
1337667655.935 23 192.168.50.100 TCP_MISS/200 3934 GET
http://www.frontera.info/Interactivos/skins/fotos/skin.css -
DIRECT/216.240.181.163 text/css
1337667655.966 27 192.168.50.100 TCP_MISS/200 3995 GET
http://www.frontera.info/Interactivos/skins/elementos/skin.css -
DIRECT/216.240.181.163 text/css
1337667655.971 23 192.168.50.100 TCP_MISS/200 4260 GET
http://www.frontera.info/HojasEstilos/ui.tabs.css -
DIRECT/216.240.181.163 text/css
1337667655.972 24 192.168.50.100 TCP_MISS/200 4953 GET
http://www.frontera.info/HojasEstilos/thickbox.css -
DIRECT/216.240.181.163 text/css
1337667655.993 21 192.168.50.100 TCP_MISS/200 4380 GET
http://www.frontera.info/js/finanzas.js - DIRECT/216.240.181.163
application/x-javascript
1337667655.997 47 192.168.50.100 TCP_MISS/200 9341 GET
http://www.frontera.info/Interactivos/lib/jquery.jcarousel.pack.js -
DIRECT/216.240.181.163 application/x-javascript
1337667656.023 25 192.168.50.100 TCP_MISS/200 4239 GET
http://www.frontera.info/videos/external_script.js -
DIRECT/216.240.181.163 application/x-javascript

All 3 settings: the same TCP_MISS.

I was thinking that maybe I would get more TCP_HIT / TCP_MEM_HIT, but no.
noticiasmvs.com gets a lot of HITs, but under all 3 settings.

Does this site disable caching of its content? Is there a way to find out?
What could cause me to still get a lot of MISSes?
Were my settings wrong?
Was my test not done the best way?
How can I see if these new settings make a difference?

Any input will be appreciated, thanks for your time!!!

I'm using squid 2.7.x