Re: [squid-users] SSL Bump Issue

2016-03-06 Thread Ali Jawad
Hi Amos
Thanks for the elaborate reply, I highly appreciate it. I did flush
iptables and re-applied the rules from scratch; see:

[root@kgoDcyTx9 ~]# iptables -nL -t nat

Chain PREROUTING (policy ACCEPT)

target prot opt source   destination

ACCEPT     tcp  --  162.220.xx.xx  0.0.0.0/0    tcp dpt:443

ACCEPT     tcp  --  162.220.xx.xx  0.0.0.0/0    tcp dpt:80

REDIRECT   tcp  --  0.0.0.0/0      0.0.0.0/0    tcp dpt:80
redir ports 3128

REDIRECT   tcp  --  0.0.0.0/0      0.0.0.0/0    tcp dpt:443
redir ports 3129


Chain POSTROUTING (policy ACCEPT)

target prot opt source   destination

MASQUERADE  all  --  0.0.0.0/0    0.0.0.0/0


Chain OUTPUT (policy ACCEPT)

target prot opt source   destination

[root@kgoDcyTx9 ~]# iptables -nL

Chain INPUT (policy ACCEPT)

target prot opt source   destination


Chain FORWARD (policy ACCEPT)

target prot opt source   destination


Chain OUTPUT (policy ACCEPT)

target prot opt source   destination



The problem is I am still getting the same loop; see below. Any more
input to try, please?


The following error was encountered while trying to retrieve the URL:
https://162.220.xx.xx/*

*Connection to 162.220.xx.xx failed.*

The system returned: *(111) Connection refused*

On Mon, Mar 7, 2016 at 4:57 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 7/03/2016 2:50 p.m., Ali Jawad wrote:
> > Hi
> > Pardon me if I am mistaken, but isn't it the case that 1:
> >
> > iptables -t nat -A PREROUTING -p tcp  --dport 443 --destination
> > 162.220.xx.xx -j REDIRECT --to-ports 3129
> > The rule above would only match the IP of squid, and squid should be
> > heading to the actual IP of the site in question, which is not on the
> > same server
>
> Squid itself is *never* a valid destination IP on intercepted traffic.
> The purpose of the REDIRECT/DNAT is to make it a destination when it did
> not start that way.
>
> If you meant to write " ! --destination", you would be correct. However
> the difficulty you already had in using the '!' correctly is a good
> reason why we don't demo it that way. It's just plain difficult for
> beginners to understand what's going on (and even for some experts).
>
> Also, the ! mechanism does not cope well with multiple IPs on the Squid
> machine. In the modern Internet every machine in existence has a
> minimum of between 3 and 6 IPs, maybe more if the admin actively assigns
> multiple global IPs. They all need to be excluded for the protection to
> be fully effective.
>
>
> >
> > and 2 :
> >
> > If Squid is intercepting, the PREROUTING chain would not apply anymore,
> > as traffic passing through local daemons goes through the OUTPUT and
> > POSTROUTING chains
>
> If the packets stayed within the Squid machine, that would be right.
> However, outgoing packets with the Squid IP as the destination can reach
> the switch to which Squid is plugged in and "bounce" right back in
> through all the normal PREROUTING logic. Infinite loop, and very much
> pain trying to figure out what is going on.
>
> >
> > As for
> >
> > iptables -t nat -A PREROUTING -s $SQUIDIP -p tcp --dport 80 -j ACCEPT
> >
>
> Both the -s parameter here and the mangle table rule are pre-emptively
> truncating the NAT loop so that the packets end up being routed normally
> instead of diverted into that Squid intercept port. They also
> simultaneously prevent external attacks on the NAT system (and Squid)
> from remote clients.
>
> As mentioned above, the --destination way(s) of doing things both do
> not scale to all the IPs on the current machine, and are far less easy
> for beginners to understand. So it is a multiple-win situation to do it
> the way we demo.
>
> >
> > All traffic set to ACCEPT ..thanks !
>
>
> Not all traffic hopefully. Just the stuff outgoing / generated by Squid
> itself :-P
>
> Amos
>
>
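For reference, the exclusion-then-redirect ordering described above can be sketched as follows (a rough sketch only, not from the thread; the IP is a placeholder for the proxy's own address, and the ports match the config in this thread):

```shell
# 1) Let packets sourced from Squid's own address pass untouched, so its
#    outgoing connections are never diverted back into the intercept ports.
SQUIDIP=162.220.xx.xx   # placeholder: the proxy's own address

iptables -t nat -A PREROUTING -s $SQUIDIP -p tcp --dport 80  -j ACCEPT
iptables -t nat -A PREROUTING -s $SQUIDIP -p tcp --dport 443 -j ACCEPT

# 2) Divert everyone else into Squid's intercept ports.
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 3128
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 3129
```

The ACCEPT rules must come before the REDIRECT rules, since netfilter stops at the first matching rule in the chain.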
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL Bump Issue

2016-03-04 Thread Ali Jawad
Hi Amos

Thanks for your input, I did recompile

See :

Squid Cache: Version 3.5.15-20160302-r14000

Service Name: squid

configure options:  '--prefix=/squid' '--includedir=/squid/usr/include'
'--enable-ssl-crtd' '--datadir=/squid/usr/share' '--bindir=/squid/usr/sbin'
'--libexecdir=/squid/usr/lib/squid' '--localstatedir=/squid/var'
'--sysconfdir=/squid/etc/squid' '--enable-arp-acl'
'--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic=DB,LDAP,NCSA,PAM,RADIUS,SASL,SMB,getpwnam'
'--enable-auth-ntlm=smb_lm,fake'
'--enable-auth-digest=file,LDAP,eDirectory'
'--enable-auth-negotiate=kerberos'
'--enable-external-acl-helpers=file_userip,LDAP_group,session,unix_group,wbinfo_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--enable-esi' '--with-aio' '--with-default-user=squid'
'--with-filedescriptors=64000' '--with-dl' '--with-openssl'
'--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu'
'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'
'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fpie'
'CXXFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fpie'
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'
'--enable-ltdl-convenience' '--disable-ipv6'


Yes, the IP in question is my Squid IP. I am still getting the same
error; it is as if squid sends traffic to itself.

The only difference is that I see this in the access log now

1457080684.426  0 84.208.223.203 TAG_NONE/200 0 CONNECT
162.220.xx.xx:443 - ORIGINAL_DST/162.220.xx.xx -

Not sure if this means anything.


Regards

On Fri, Mar 4, 2016 at 6:39 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 4/03/2016 11:57 a.m., Ali Jawad wrote:
> > Hi
> > I am using Squid
> >
> > [root@kgoDcyTx9 squid]# /squid/sbin/squid  -v
> >
> > Squid Cache: Version 3.4.9
>
>
> When using SSL-Bump functionality first port of call is to ensure you
> are using the latest release.
>
> Today that is 3.5.15 (though I recommend the snapshot tarball instead of
> the main one). Or 4.0.7 beta.
>
>
> >
> > Config Options
> >
> >
> > https_port 3129 intercept ssl-bump generate-host-certificates=on
> > dynamic_cert_mem_cache_size=4MB cert=/squid/etc/squid/ssl_cert/myca.pem
> > key=/squid/etc/squid/ssl_cert/myca.pem
> >
> >
> 
>
> >
> > Iptables Rule
> >
> > iptables -t nat -A PREROUTING -p tcp  --dport 443 --destination
> > 162.220.xx.xx -j REDIRECT --to-ports 3129
> >
>
> So what happens to the Squid traffic going to port 443 ?
>
> >
> > The problem :
> >
> > There are no certificate errors in the cache log and access log appears
> > to
> > log the requested URL, the problem is that Squid shows the error below,
> > from the looks of it Squid is trying to send the request to itself on its
> > own  IP, my assumption is that Squid is not able to detect the proper
> > destination during bump "through a config fault of my own" or a missing
>
> The machine NAT system tells Squid what the destination is supposed to be.
>
> > step. Please advise:
> >
> > The following error was encountered while trying to retrieve the URL:
> > ://162.220.xx.xx:443
> >
> > *Connection to 162.220.244.7 failed.*
> >
>
> Is "162.220.244.7" your Squid IP?
>
>
> Amos
>


[squid-users] SSL Bump Issue

2016-03-03 Thread Ali Jawad
Hi
I am using Squid

[root@kgoDcyTx9 squid]# /squid/sbin/squid  -v

Squid Cache: Version 3.4.9

configure options:  '--prefix=/squid' '--includedir=/squid/usr/include'
'--enable-ssl-crtd' '--datadir=/squid/usr/share' '--bindir=/squid/usr/sbin'
'--libexecdir=/squid/usr/lib/squid' '--localstatedir=/squid/var'
'--sysconfdir=/squid/etc/squid' '--enable-arp-acl'
'--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
'--enable-auth-ntlm=smb_lm,fake'
'--enable-auth-digest=file,LDAP,eDirectory'
'--enable-auth-negotiate=kerberos'
'--enable-external-acl-helpers=file_userip,LDAP_group,session,unix_group,wbinfo_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--enable-esi' '--with-aio' '--with-default-user=squid'
'--with-filedescriptors=64000' '--with-dl' '--with-openssl'
'--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu'
'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'
'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fpie'
'CXXFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fpie'
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'
'--enable-ltdl-convenience' '--disable-ipv6'


Config Options


https_port 3129 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/squid/etc/squid/ssl_cert/myca.pem
key=/squid/etc/squid/ssl_cert/myca.pem


#always_direct allow all

ssl_bump server-first all

sslproxy_cert_error allow all

sslproxy_flags DONT_VERIFY_PEER

#sslproxy_cert_error deny all

#sslproxy_flags DONT_VERIFY_PEER


sslcrtd_program /squid/usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB

sslcrtd_children 8 startup=1 idle=1


Iptables Rule

iptables -t nat -A PREROUTING -p tcp  --dport 443 --destination
162.220.xx.xx -j REDIRECT --to-ports 3129


The problem :

There are no certificate errors in the cache log, and the access log
appears to log the requested URL. The problem is that Squid shows the
error below; from the looks of it, Squid is trying to send the request to
itself on its own IP. My assumption is that Squid is not able to detect
the proper destination during the bump "through a config fault of my own"
or a missing step. Please advise:

The following error was encountered while trying to retrieve the URL:
://162.220.xx.xx:443


*Connection to 162.220.244.7 failed.*

The system returned: *(111) Connection refused*


Re: [squid-users] access-lists from mysql ?

2013-01-24 Thread Ali Jawad
Hi
Checking the db and echoing OK or ERR based on that is easy for me to
implement in php; for now I did add the following:

external_acl_type MyAclHelper %SRC  /etc/squid/myaclhelper.php

myaclhelper.php only contains
<?php
echo OK;
?>

Later on I want to replace that with a mysqldb check and read stdin to
obtain %SRC, but for now I just want this to work. When squid starts I
get the following in the log:

The MyAclHelper helpers are crashing too rapidly, need help!

Any help with this simple setup, please?

Regards


On Thu, Jan 24, 2013 at 12:53 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 On 1/24/2013 12:13 AM, Ali Jawad wrote:

 Hi
 Is it possible to load access-lists from a database ? I.e. I want to
 read all the allowed src IPs from a database, all the examples I could
 fine are around user authentication and not IP access-lists. If it is
 possible can you please show me a few pointers ? Any example config /
 howto ?
 Thanks


 For this kind of setup you'd better use an external_acl helper with DB which
 is pretty simple to implement.

 Regards,
 Eliezer


Re: [squid-users] access-lists from mysql ?

2013-01-24 Thread Ali Jawad
Based on the documentation I should set up something similar to

external_acl_type MyAclHelper %SRC %LOGIN %{Host} /usr/local/squid/libexec/myaclhelper

However I only want to authenticate based on src IP so I want to use :

external_acl_type MyAclHelper %SRC /usr/local/squid/libexec/myaclhelper

I am not getting any errors right now, but I am getting access denied,
although the script echoes OK, and it works using stdin.

Regards

On Thu, Jan 24, 2013 at 3:44 PM, Ali Jawad alijaw...@gmail.com wrote:
 Thanks for that I did change the script to

 <?php
 $f = fopen( 'php://stdin', 'r' );

 while( $line = fgets( $f ) ) {
   echo OK;
 }
 ?>

 while testing in command line using :

  /usr/bin/php  myaclhelper.php

 Each time I press enter it returns OK, it keeps running and does not
 exit. However in squid log I get :

  The MyAclHelper helpers are crashing too rapidly, need help!

 What I think is wrong is that I need to identify what needs to handle
 the script, i.e. how does squid know this is a php script and not a
 perl script?

 Regards


 On Thu, Jan 24, 2013 at 12:02 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 24/01/2013 10:44 p.m., Ali Jawad wrote:

 Hi
 Checking the db and echoing OK or ERR based on that is easy for me to
 implement in php, for now I did add the following :

 external_acl_type MyAclHelper %SRC  /etc/squid/myaclhelper.php

 myaclhelper.php only contains
 <?php
 echo OK;
 ?>

 Later on I want to replace that with a mysqldb check and read stdin to
 obtain %SRC, but for now I just want this to work, when squid starts I
 get the following in the log :

 The MyAclHelper helpers are crashing too rapidly, need help!

 Any help with this simple setup please ?



 You need a loop.

 Please read the intro section from the link below on how helpers operate
 under Squid:
 http://wiki.squid-cache.org/Features/AddonHelpers#How_do_the_helpers_communicate_with_Squid.3F

 (I have just updated it to clarify the I/O criteria helpers must follow and
 what the consequences are).

 Amos
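To make the loop requirement concrete, here is a minimal helper sketch in shell rather than php (the allowed IP and the verdict logic are hypothetical stand-ins; a real helper would do the database lookup per line):

```shell
#!/bin/sh
# Sketch of the Squid external_acl_type helper protocol: keep looping,
# reading one %SRC value per line and writing one OK/ERR verdict per
# line. A helper that answers once and exits is what triggers the
# "crashing too rapidly" error.

verdict() {
  # Hypothetical check: allow one fixed source IP, deny the rest.
  case "$1" in
    192.168.0.165) echo "OK" ;;
    *)             echo "ERR" ;;
  esac
}

helper_loop() {
  while read -r src; do
    verdict "$src"
  done
}

# Demo: simulate Squid feeding two requests on stdin.
printf '192.168.0.165\n10.0.0.9\n' | helper_loop
```

The shebang line also answers the interpreter question: Squid simply executes the helper file, so the kernel picks the interpreter from `#!`; a php helper would need to start with a php shebang and be marked executable.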


[squid-users] access-lists from mysql ?

2013-01-23 Thread Ali Jawad
Hi
Is it possible to load access-lists from a database? I.e. I want to
read all the allowed src IPs from a database; all the examples I could
find are around user authentication and not IP access-lists. If it is
possible, can you please show me a few pointers? Any example config /
howto?
Thanks


[squid-users] Squid transparent proxy woes

2012-12-23 Thread Ali Jawad
Hi
I am trying to setup a transparent proxy for my own use which I can
use to access geo blocked services, I have tried with 3.1.10 and
3.3.0.1 and I am facing different problems in both cases. Let me first
describe the network setup

my lan --> GW --> Internet --> Dedicated Server --> Destination sites

I point the sites I want to access at the proxy using DNS, i.e. I set up
site xyz.com to point to my DNS server on my local LAN. This did work
fine on 3.1.10 but not with SSL; I was advised to use the latest Squid,
however on the latest Squid I am facing different problems, as neither 80
nor 443 is working. I am using http_access allow all for testing purposes.

First Case
Squid on a dedicated server CentOS 6, Squid version 3.1.0
Squid is the default repo install in this case

For http traffic this works just fine. For https traffic, however, once
I get past the SSL security error page in the browser, the traffic leaves
the squid server as http, which causes the destination site to redirect
to https; the squid server then sends the traffic again as http instead
of https, and this causes a loop, and the browser throws the related
error.


Second Case
Squid on a dedicated server CentOS 6, Squid version 3.3.0.1
Squid Cache: Version 3.3.0.1
configure options:  '--enable-ssl' '--prefix=/usr/local/squid2'
'--with-large-files' '--enable-linux-netfilter'
--enable-ltdl-convenience

As said, I am allowing all traffic using the same config as above. Both
http and https traffic give access denied errors in the browser; the
logs, however, only show misses and not denials.

The relevant lines of the config are :

http_port 0.0.0.0:8128
http_port 0.0.0.0:880 transparent
https_port 0.0.0.0:8443 transparent  ssl-bump
cert=/etc/squid/proxy.example.com.cert
key=/etc/squid/proxy.example.com.key

and iptables looks as follows :

REDIRECT   tcp  --  0.0.0.0/0    xx.xx.xx.xx  tcp dpt:443
redir ports 8443
REDIRECT   tcp  --  0.0.0.0/0    xx.xx.xx.xx  tcp dpt:80
redir ports 880

I am at my wits' end here; please advise.

Regards


Re: [squid-users] Access denied on transparent after upgrade 3.1.x to 3.3

2012-12-20 Thread Ali Jawad
Hi
I do intercept traffic using iptables; the problem is that the same
config works for squid 3.1.2. I did remove all access rules and ended up
with the config below, but I still get an access denied error.

always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all


http_port 0.0.0.0:80 transparent
http_port 0.0.0.0:8080 transparent
http_port 0.0.0.0:3128
#http_port 127.0.0.1:3080 intercept
#https_port 0.0.0.0:443 transparent  intercept
cert=/etc/squid/proxy.example.com.cert
key=/etc/squid/proxy.example.com.key
#https_port 0.0.0.0:443 transparent  ssl-bump
cert=/etc/squid/proxy.example.com.cert
key=/etc/squid/proxy.example.com.key

http_access allow all

coredump_dir /usr/local/squid/var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
#debug_options ALL,3


Regards

On Wed, Dec 19, 2012 at 9:45 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Regards,
 Eliezer


Re: [squid-users] Access denied on transparent after upgrade 3.1.x to 3.3

2012-12-20 Thread Ali Jawad
Hi
I missed pointing out an important factor: the server is a remote
transparent proxy. In other words:

my pc uses a custom dns to point certain sites to the proxy server -->
Internet Gateway --> transparent proxy with public IP that redirects
port 80 to the proxy

Regards

On Thu, Dec 20, 2012 at 11:05 AM, Ali Jawad alijaw...@gmail.com wrote:
 Hi
 I do intercept traffic using iptables, problem is same config works
 for squid 3.1.2, I did remove all access rules and ended up with the
 config below but I still get an access denied error.

 always_direct allow all
 ssl_bump allow all
 sslproxy_cert_error allow all


 http_port 0.0.0.0:80 transparent
 http_port 0.0.0.0:8080 transparent
 http_port 0.0.0.0:3128
 #http_port 127.0.0.1:3080 intercept
 #https_port 0.0.0.0:443 transparent  intercept
 cert=/etc/squid/proxy.example.com.cert
 key=/etc/squid/proxy.example.com.key
 #https_port 0.0.0.0:443 transparent  ssl-bump
 cert=/etc/squid/proxy.example.com.cert
 key=/etc/squid/proxy.example.com.key

 http_access allow all

 coredump_dir /usr/local/squid/var/cache/squid

 # Add any of your own refresh_pattern entries above these.
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
 refresh_pattern .               0       20%     4320
 #debug_options ALL,3


 Regards

 On Wed, Dec 19, 2012 at 9:45 PM, Eliezer Croitoru elie...@ngtech.co.il 
 wrote:
 Regards,
 Eliezer


Re: [squid-users] Re: Too many loops with https

2012-12-19 Thread Ali Jawad
Thank you all for your help. I am running an RPM install, so I am not
sure whether the patch can be applied to an RPM installation; if not, I
will compile from source. What version do you think I should compile,
and if I do compile the recommended version, do I still need to apply
the patch?
Regards

On Wed, Dec 19, 2012 at 1:33 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 19/12/2012 8:35 a.m., Ahmed Talha Khan wrote:

 Hey Ali,

 You seem to have hit upon a bug in the squid code-base. I am copying
 a patch to fix this bug (somehow I am unable to add an attachment). If
 you are unable to apply the patch directly because of the code version,
 just apply it manually. It's a one-liner.


 Let us know how it goes.


 Re-opens other bugs. Namely, clients being delivered error pages whenever
 you reconfigure Squid.

 Please upgrade your Squid to 3.1.21 or later instead.

 Or apply the final patch from earlier this year:
 http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10459.patch

 Amos


Re: [squid-users] Re: Too many loops with https

2012-12-19 Thread Ali Jawad
Hi Amos
I did compile 3.2.5, which is the latest stable release; if you think I
should go for 3.3, please let me know. The problem is I am getting access
denied for all pages, although I did set allow-all directives and I did
use the same config as before.

See below config please

http://pastebin.com/vEWgsPkz


Regards

On Wed, Dec 19, 2012 at 12:34 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 This problem being a known problem and long fixed, you will not even have to
 patch the latest sources.


Re: [squid-users] Re: Too many loops with https

2012-12-19 Thread Ali Jawad
Actually this only happens if transparent is set on the port; it does
work fine for the other ports. I had to remove acl manager proto
cache_object from my config during the upgrade, but I did allow localhost
and to_localhost.
Regards

On Wed, Dec 19, 2012 at 1:40 PM, Ali Jawad alijaw...@gmail.com wrote:
 Hi Amos
 I did compile 3.2.5 which is the latest stable release, if you think I
 should go for 3.3 please let me know. Problem is I am getting access
 denied for all pages although I did set allow to all directives I did
 use the same config as before.

 See below config please

 http://pastebin.com/vEWgsPkz


 Regards

 On Wed, Dec 19, 2012 at 12:34 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 This problem being a known problem and long fixed, you will not even have to
 patch the latest sources.


[squid-users] Access denied on transparent after upgrade 3.1.x to 3.3

2012-12-19 Thread Ali Jawad
Hi
I did upgrade from squid 3.1.x to 3.3, and tried 3.2.5 in between. The
problem now is that, since I have upgraded, the transparent proxy always
returns access denied even if I do set src all to allowed. Please see a
sample config below

http://pastebin.com/vEWgsPkz

On 3.1.x the transparent proxy did work just fine for http_port, I did
upgrade due to issue with https_port, but since the upgrade even the
http_port for transparent always returns denied.

Please advise.

Regards


[squid-users] Re: Access denied on transparent after upgrade 3.1.x to 3.3

2012-12-19 Thread Ali Jawad
My compile options are :

[root@v01-chi squid-3.3.0.2]# /usr/local/squid2/sbin/squid -v
Squid Cache: Version 3.3.0.2
configure options:  '--enable-ssl' '--enable-large-files'
'--enable-linux-netfilter' '--prefix=/usr/local/squid2/'
--enable-ltdl-convenience


On Wed, Dec 19, 2012 at 4:05 PM, Ali Jawad alijaw...@gmail.com wrote:
 Hi
 I did upgrade from squid 3.1.x to 3.3 and tried 3.2.5 in between
 problem is now that i have upgraded transparent proxy always returns
 access denied even if I do set src all to allowed. Please see a sample
 config below

 http://pastebin.com/vEWgsPkz

 On 3.1.x the transparent proxy did work just fine for http_port, I did
 upgrade due to issue with https_port, but since the upgrade even the
 http_port for transparent always returns denied.

 Please advice.

 Regards


Re: [squid-users] Help with Squid HTTPS proxy

2012-12-18 Thread Ali Jawad
Hi All
I will make use of your suggestions, but this is not just netflix
related; basically whatever site I visit I get this error about
LookupHostIP: Given
Non-IP 'signup.netflix.com': Name or service not known

(with the variation of the hostname at hand, of course).

Regards
On Tue, Dec 18, 2012 at 5:58 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 18/12/2012 1:31 p.m., Joshua B. wrote:

 Netflix doesn't work through Squid
 The only option you have to allow Netflix to work through a proxied
 environment without adding exceptions on all your clients, is to put this
 code in your configuration file:

 acl netflix dstdomain .netflix.com
 cache deny netflix

 That allows Netflix to fully work through the proxy.
 Tested, and I therefore know it works on my network.


 All that does is prevent *caching* of Netflix objects, all the other proxy
 handling and traffic management is still operating.

 That is a clear sign that your caching rules are causing problems, or that
 the site itself has very broken cache controls. A quick scan of Netflix
 shows a fair knowledge of caching control, geared towards non-caching of
 objects. Which points back at your config being the problem.

  Do you have refresh_pattern with loose regex and ignore-* options
 forcing things to cache which are supposed to not be stored? Please
 check and remove.

 Amos


[squid-users] Too many loops with https

2012-12-18 Thread Ali Jawad
Hi
I am trying to set up a squid proxy with transparent https, but I am
getting Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many
redirects. I am using the default CentOS installation (3.1) with
--enable-ssl. http is working fine; for https I get the ssl
certificate error page and then the loop error. My config is pretty
simple, and I did try to change from intercept to sslbump and a
combination of both, but nothing of that seems to make any
difference. The problem is the same for all https sites.

See below, the config please :

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl trusted src all   # internal IP from venet0:1 and ISP IP (Cable/DSL)
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow trusted
http_access allow localhost
http_access deny all
http_port 0.0.0.0:3128
http_port 0.0.0.0:8128 transparent
https_port 0.0.0.0:8129 transparent ssl-bump intercept
cert=/usr/local/squid/CA/servercert.pem
key=/usr/local/squid/CA/serverkey.pem
debug_options ALL,3
coredump_dir /var/spool/squid3
cache deny all
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
request_header_access Proxy-Connection deny all
request_header_access X-Forwarded-For deny all
request_header_access Connection deny all
request_header_access Via deny all
forwarded_for off


[squid-users] Re: Too many loops with https

2012-12-18 Thread Ali Jawad
OK, I finally know what the problem is. I did use tcpdump, and when I
make an ssl request squid intercepts it and sends it as http to the
destination website; this causes the website to redirect to https, and
then squid in turn makes another http request. I did make a few tests
with different sites and I am sure about this.

Any clues about what I did wrong to cause this? I did try with
https_port intercept, with ssl-bump, and with both.

Thanks !

On Tue, Dec 18, 2012 at 12:41 PM, Ali Jawad alijaw...@gmail.com wrote:
 Hi
 I am trying to setup a squid proxy with transparent https, but I am
 getting Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many
 redirects. I am using the default CentOS installation with
 --enable-ssl  3.1. http is working fine, for https I get the ssl
 certificate error page and then the loop error. My config is pretty
 simple and I did try to change from intercept to sslbump and a
 combination of both, but nothing of that seems to make any
 difference.The problem is the same for all https sites.

 See below, the config please :

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32 ::1
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
 acl trusted src all   # internal IP from venet0:1 and ISP IP (Cable/DSL)
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow trusted
 http_access allow localhost
 http_access deny all
 http_port 0.0.0.0:3128
 http_port 0.0.0.0:8128 transparent
 https_port 0.0.0.0:8129 transparent ssl-bump intercept
 cert=/usr/local/squid/CA/servercert.pem
 key=/usr/local/squid/CA/serverkey.pem
 debug_options ALL,3
 coredump_dir /var/spool/squid3
 cache deny all
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
 refresh_pattern .               0       20%     4320
 request_header_access Proxy-Connection deny all
 request_header_access X-Forwarded-For deny all
 request_header_access Connection deny all
 request_header_access Via deny all
 forwarded_for off


[squid-users] Help with Squid HTTPS proxy

2012-12-17 Thread Ali Jawad
Hi
I am trying to set up an HTTPS transparent proxy with the latest stable
squid compiled with --enable-ssl. The problem is that the squid server
returns a connection refused error, but the thing is that it was trying
to connect to itself. I did also check using tcpdump, and actually no
https requests are leaving for the destination site; all the https
traffic is happening between the browser and the squid server.

2012/12/18 02:39:56.086 kid1| url.cc(385) urlParse: urlParse: Split
URL 'https://signup.netflix.com/global' into proto='https',
host='signup.netflix.com', port='443', path='/global'
2012/12/18 02:39:56.086 kid1| Address.cc(409) LookupHostIP: Given
Non-IP 'signup.netflix.com': Name or service not known

And basically the same happens for other hostnames; however, when I
run nslookup on the squid server the lookup is correct, and the
resolv.conf file lists the Google DNS as the top DNS server.

Config below

http://pastebin.com/Cm29hmXL

Please advise


[squid-users] Issue with setting up local proxy

2011-11-15 Thread Ali Jawad
Hi

I have a server set up with SQUID for 6 users; these users use the
same system on which squid is installed. Now my problem is that I want
to force all their traffic through SQUID, so I did set up the following
iptables rule on the server :

iptables -t nat -I OUTPUT   -p tcp --dport 80 -j DNAT --to 192.168.0.165:3128

Where 192.168.0.165 is the squid server and the server the users use.

When a user tries to access a denied site, he gets access denied. When
he tries to access a white-listed site he gets :

* Unable to forward this request at this time.



Full error http://pastebin.com/ZVz1EGMr

However, when a user puts the proxy in the browser after I remove the
iptables rule, all works fine. I did check always_direct, never_direct
and the other related settings.

Right now I am clueless. This is CentOS 5.5 with SQUID 2.6

Regards
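A common way to break this kind of same-host redirect loop (a sketch under assumptions, not from the thread: it assumes Squid runs as the system user `squid` and that the netfilter `owner` match is available) is to exempt the proxy's own outgoing traffic in the OUTPUT chain before redirecting:

```shell
# Skip packets generated by the squid user itself, so proxied requests
# are not redirected back into the proxy; then redirect everyone
# else's port-80 traffic into the local proxy port.
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 3128
```

REDIRECT is used here instead of DNAT since the proxy is on the same host; either way, the exemption rule must come first.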


[squid-users] Reverse Proxy on Squid to port 8080

2011-04-25 Thread Ali Jawad
Hi

I have got a reverse proxy that is working just fine; it accepts
requests on port 443 and port 80 and ONLY sends traffic upstream on
port 80 to the apache server listening on localhost.

I use the following config:


https_port 10.14.1.72:443 cert=/etc/squid/self_certs/site.crt
key=/etc/squid/self_certs/site.key defaultsite=site vhost

cache_peer 127.0.0.1 parent 443 80 no-query originserver login=PASS

http_port 10.14.1.72:80 vhost


My problem is the following :

The site should act differently on some occasions based on whether
http or https was requested. So my idea is to set up a second http vhost
on apache listening on port 8080, and on that vhost I would serve the
https code. So is it possible to use SQUID to :

Send traffic destined for port 443 to localhost:8080
and
Send traffic destined for port 80 to localhost:80 ?

Any hints/ comments are highly appreciated.
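One way to do this, sketched under the assumption of a Squid version that supports the cache_peer name= option and the myport ACL (the peer names below are made up), is to define two origin peers and route by the port the client originally connected to:

```
# Origin on port 80 for plain-HTTP requests,
# origin on port 8080 for requests that arrived over HTTPS.
cache_peer 127.0.0.1 parent 80   0 no-query originserver login=PASS name=plainPeer
cache_peer 127.0.0.1 parent 8080 0 no-query originserver login=PASS name=sslPeer

acl arrivedTLS myport 443
cache_peer_access sslPeer   allow arrivedTLS
cache_peer_access sslPeer   deny  all
cache_peer_access plainPeer deny  arrivedTLS
cache_peer_access plainPeer allow all
```

The myport ACL matches the Squid listening port the client hit (443 or 80 here), so each request is steered to the matching Apache vhost.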


[squid-users] Issue with XML requests

2010-01-25 Thread Ali Jawad
Hi
We are developing an application that sends XML requests to our
webserver. We have a non-caching Squid server on our local network;
when the Squid server is in use we don't get the result back from the
server, but when we don't use it we do, although no content filtering
rules are in place. If the request is made through a browser, we get
the answer.

This is the SQUID log for a browser
126735.732   1748 127.0.0.1 TCP_MISS/200 623 GET
http://xyz.com/balance2.php? - DIRECT/87.236.144.25 text/xml
This is the SQUID log for our application
126752.166  60004 127.0.0.1 TCP_MISS/000 0 POST
http://xyz.com/balance2.php - DIRECT/87.236.144.25 -


As for the server itself

This is the log when passing through SQUID with application
sourceIP - - [25/Jan/2010:17:17:44 +] POST /balance2.php HTTP/1.0 200 35
This is the log when NOT passing through SQUID with application
sourceIP - - [25/Jan/2010:17:18:55 +] POST /balance2.php HTTP/1.1 200 82

Can anyone please point me in the right direction ?

Regards
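The 60004 ms elapsed time on the TCP_MISS/000 line suggests Squid waited about 60 seconds for a reply that never came. To narrow down whether the proxy or the application's request framing is at fault, a comparable POST can be reproduced with curl (a sketch; the hostname, credentials, and proxy address are placeholders):

```
# Direct to the origin server
curl -v -d 'username=user&password=pass' http://xyz.com/balance2.php

# Through the Squid proxy
curl -v -x http://127.0.0.1:3128 -d 'username=user&password=pass' http://xyz.com/balance2.php
```

If curl succeeds both ways, the problem lies in how the application writes its request on the wire, not in Squid's handling of POST in general.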


[squid-users] Re: Issue with XML requests

2010-01-25 Thread Ali Jawad
Without SQUID

The packet is


POST /balance2.php HTTP/1.1.
Host: xyz
content-type: application/x-www-form-urlencoded.
Connection: Keep-Alive.
content-length: 36.
.
username=sourceedge2&password=123456


With SQUID the request is:


POST /balance2.php HTTP/1.0.
Host: xyz.com.
Content-Type: application/x-www-form-urlencoded.
Content-Length: 36.
Via: 1.1 y.net:3128 (squid/2.6.STABLE5).
X-Forwarded-For: 127.0.0.1.
Cache-Control: max-age=259200.
Connection: keep-alive.

As you can see, the argument line (the POST body) is missing, and the server responds with:

HTTP/1.1 200 OK.
Date: Mon, 25 Jan 2010 18:19:38 GMT.
Server: Apache/2.2.3 (CentOS).
X-Powered-By: PHP/5.1.6.
Content-Length: 35.
Connection: close.
Content-Type: text/html; charset=UTF-8.
.
Error passing variables (AD err 01)
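Both captures advertise Content-Length: 36, but in the proxied request nothing follows the blank line, which matches the server's "Error passing variables" reply: PHP is receiving an empty POST. To see exactly what the application emits before Squid touches it, a throwaway listener can stand in for the proxy (a sketch; some netcat variants need `nc -l -p 8000` instead of `nc -l 8000`):

```
# Dump the raw request the application sends, headers and body included
nc -l 8000 | tee /tmp/raw-request.txt
```

Point the application's proxy setting at localhost:8000 and compare the dump with the tcpdump output above to see whether the body is already missing at the source.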


On Mon, Jan 25, 2010 at 6:30 PM, Ali Jawad alijaw...@gmail.com wrote:


[squid-users] Bungled Conf..SSL

2009-09-21 Thread Ali Jawad
Hi

I compiled 2.7 from source on Debian with --enable-ssl; of course, I
installed the libssl-dev package for the SSL headers first. The
problem is that Squid won't start: it complains about the https_port
line being BUNGLED.

The error/line is :

FATAL: Bungled squid.conf line 7: https_port 443 10.10.11.11:443
cert=/etc/squid/self_certs/exyz.crt key=/etc/squid/self_certs/xyz.key
protocol=http accel defaultsite=xyz vhost

The relevant configure output is:

app02:/usr/src/squid-2.7.STABLE7# cat output | grep ssl
checking for openssl/err.h... yes
checking for openssl/md5.h... yes
checking for openssl/ssl.h... yes
checking for openssl/engine.h... yes
app02:/usr/src/squid-2.7.STABLE7# cat output | grep SSL
SSL gatewaying using OpenSSL enabled
Using OpenSSL MD5 implementation

I did specifically use --enable-ssl

Please advise.
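For reference, the directive is rejected because it contains two port specifications: the bare `443` and `10.10.11.11:443`. https_port takes a single address:port (or a bare port), so a corrected line, keeping the original placeholder names and assuming the remaining options are valid for this build, would be:

```
https_port 10.10.11.11:443 cert=/etc/squid/self_certs/exyz.crt key=/etc/squid/self_certs/xyz.key accel defaultsite=xyz vhost
```

Dropping the stray leading `443` is the key change; everything after the address:port is an option list.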