[squid-users] squid crash on restart

2016-03-09 Thread Alex Samad
Hi

running
 rpm -qa squid
squid-3.5.14-1.el6.x86_64


Doing a restart, I saw this:
2016/03/10 14:36:28 kid1| Squid Cache (Version 3.5.14): Exiting normally.
FATAL: Received Segment Violation...dying.
2016/03/10 14:36:28 kid1| storeDirWriteCleanLogs: Starting...

in cache.log

and message log
Mar 10 14:29:38 alcdmz1 squid[31939]: Squid Parent: will start 1 kids
Mar 10 14:29:38 alcdmz1 squid[31939]: Squid Parent: (squid-1) process
31941 started
Mar 10 14:36:28 alcdmz1 kernel: squid[31941]: segfault at 124c ip
0076bfd6 sp 750359c0 error 6 in squid[40+613000]

A
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] question about ssl_bump

2016-03-09 Thread Alex Samad
On 10 March 2016 at 14:17, Alex Rousskov
 wrote:
>>
>> I am not sure how haveServerName is constructed
>
> It is up to the Squid admin.

Thanks for the reply to the other stuff.

I'm the squid admin. I am presuming, maybe wrongly, that this is a test to
see if Squid has worked out a serverName.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] question about ssl_bump

2016-03-09 Thread Alex Samad
from http://wiki.squid-cache.org/Features/SslPeekAndSplice

# Better safe than sorry:
# Terminate all strange connections.
ssl_bump splice serverIsBank
ssl_bump bump haveServerName
ssl_bump peek all
ssl_bump terminate all

I am not sure how haveServerName is constructed

I read this as:
1) splice the connection if it meets ACL serverIsBank
2) bump the connection (MITM) if ACL haveServerName is met
3) try and peek at the SSL connection, which I understand starts the MITM
while keeping the ability to splice. I presume this means looking at the
client cert and the server cert, so you get more info, but this doesn't
stop the processing?
4) terminate all that get here. Again, nothing stops at #3, it just
gathers more info?

Is my understanding right?
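For what it's worth, here is a sketch of how I read the evaluation. The ACL definitions below are assumptions — the wiki deliberately leaves serverIsBank and haveServerName up to the admin:

```
# Hypothetical ACL definitions (the wiki leaves these to the admin):
acl serverIsBank   ssl::server_name .examplebank.com
acl haveServerName ssl::server_name_regex .   # true once *any* server name is known

# The ssl_bump list is re-checked at each SslBump step (step1: TCP-level
# info only; step2: after peeking at the TLS client hello / SNI;
# step3: after peeking at the server certificate). First match wins:
ssl_bump splice serverIsBank   # known bank name -> tunnel untouched
ssl_bump bump haveServerName   # any other known name -> decrypt (MITM)
ssl_bump peek all              # no name yet -> peek to learn one
ssl_bump terminate all         # nothing matched, nothing left to peek -> close
```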
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Youtube "challenges"

2016-02-23 Thread Alex Samad
Sounds like a controlled at-home environment.

Why not implement SSL bump?

On 24 February 2016 at 00:40, Chris Horry  wrote:
>
>
>
> On 2/23/2016 08:39, Antony Stone wrote:
>> On Tuesday 23 February 2016 at 13:57:52, Chris Horry wrote:
>>
>>> On 2/23/2016 00:01, Darren wrote:
 Hi all

 I am putting together a config to allow the kids to access
 selected videos in YouTube from a page of links on a local
 server.
>>>
>>> You might want to look into a web filter like Dan's Guardian
>>> that integrates with Squid.
>>
>> You have a working recipe for getting Dan's Guardian to filter
>> HTTPS?
>>
> Never tried it myself I'm afraid, I took an all or nothing approach to
> filtering YouTube when my kids were smaller.
>
> Chris
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump

2016-02-08 Thread Alex Samad
got the ACL backwards

# ssl-bump
# pick up from a file
#acl NoBump ssl::server_name   /etc/squid/lists/noSSLPeek.lst

# Alex test machine
acl testIP src  10.172.208.105

# for testing
acl haveServerName ssl::server_name .google.com


# Do no harm:
# Splice indeterminate traffic.
ssl_bump splice ! testIP
ssl_bump splice NoBump
ssl_bump bump haveServerName
ssl_bump peek all
ssl_bump splice all

On 9 February 2016 at 10:52, Alex Samad <a...@samad.com.au> wrote:
> Hi
>
> Starting to look at ssl-bump found
> http://wiki.squid-cache.org/Features/SslPeekAndSplice
> http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit
>
> I gather I need to modify my http_port to look something like
>
> http_port 3128 ssl-bump \
>   cert=/etc/squid/ssl_cert/myCA.pem \
>   generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
>
>
> from http_port 3128
>
> I have generated an intermediate CA from our internal CA; the cert option above
> points to a PEM file. Does that have both the public and private key in there?
>
> I wanted to test this on a specific IP, so I am using
>
> # pick up from a file
> acl NoBump ssl::server_name   /etc/squid/lists/noSSLPeek.lst
> acl NoBump src  
>
> # for testing
> acl haveServerName ssl::server_name google.com
>
>
> # Do no harm:
> # Splice indeterminate traffic.
> ssl_bump splice NoBump
> ssl_bump bump haveServerName
> ssl_bump peek all
> ssl_bump splice all
>
>
> The way I read this is: if I come from an address other than the
> test IP, the connection goes through.
> But for the test IP I try and peek and, if not, splice.
>
> Create and initialize SSL certificates cache directory <<< where do I
> set this directory in squid config ?
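On the last question above: in squid-3.5 the certificate cache directory comes from the certificate-generation helper's -s option rather than a dedicated directive. A sketch (helper path and DB location are assumptions; adjust for your build):

```
# squid.conf: dynamic certificate generation helper and its DB directory.
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
sslcrtd_children 5

# Create and initialize the DB once before starting Squid, e.g.:
#   /usr/lib64/squid/ssl_crtd -c -s /var/lib/squid/ssl_db
#   chown -R squid:squid /var/lib/squid/ssl_db
```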
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump

2016-02-08 Thread Alex Samad
Hi

Got this working. Wondering what the benefits are; wandering around
Google, YouTube and Facebook I am not seeing much cached. At least I can
pass downloads through ClamAV...

Are other people seeing caching of these sites?


On 9 February 2016 at 11:09, Alex Samad <a...@samad.com.au> wrote:
> got the ACL backwards
>
> # ssl-bump
> # pick up from a file
> #acl NoBump ssl::server_name   /etc/squid/lists/noSSLPeek.lst
>
> # Alex test machine
> acl testIP src  10.172.208.105
>
> # for testing
> acl haveServerName ssl::server_name .google.com
>
>
> # Do no harm:
> # Splice indeterminate traffic.
> ssl_bump splice ! testIP
> ssl_bump splice NoBump
> ssl_bump bump haveServerName
> ssl_bump peek all
> ssl_bump splice all
>
> On 9 February 2016 at 10:52, Alex Samad <a...@samad.com.au> wrote:
>> Hi
>>
>> Starting to look at ssl-bump found
>> http://wiki.squid-cache.org/Features/SslPeekAndSplice
>> http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit
>>
>> I gather I need to modify my http_port to look something like
>>
>> http_port 3128 ssl-bump \
>>   cert=/etc/squid/ssl_cert/myCA.pem \
>>   generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
>>
>>
>> from http_port 3128
>>
>> I have generated an intermediate CA from our internal CA; the cert option above
>> points to a PEM file. Does that have both the public and private key in there?
>>
>> I wanted to test this on a specific IP, so I am using
>>
>> # pick up from a file
>> acl NoBump ssl::server_name   /etc/squid/lists/noSSLPeek.lst
>> acl NoBump src  
>>
>> # for testing
>> acl haveServerName ssl::server_name google.com
>>
>>
>> # Do no harm:
>> # Splice indeterminate traffic.
>> ssl_bump splice NoBump
>> ssl_bump bump haveServerName
>> ssl_bump peek all
>> ssl_bump splice all
>>
>>
>> The way I read this is: if I come from an address other than the
>> test IP, the connection goes through.
>> But for the test IP I try and peek and, if not, splice.
>>
>> Create and initialize SSL certificates cache directory <<< where do I
>> set this directory in squid config ?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] ACL help

2016-02-04 Thread Alex Samad
Hi

Back to my Windows update issues :)


1454566851.333 63 10.172.208.208 TCP_MISS/206 6520 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/secu/2015/11/windows6.1-kb3109103-x64_66e00af753e3faae5d558534711af7dc29a9160d.psf
- HIER_DIRECT/203.213.73.25 application/octet-stream


Not sure how this got through.

it matches
acl windowsupdate_url url_regex -i
windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]


this allows http_access
http_access allow nonAuthDom

# never Direct
never_direct deny notwindowsupdate_url### Doesn't match
never_direct deny MsUpdateAllowed windowsupdate_url ### doesn't match
never_direct allow !DMZSRV windowsupdate_url  ## should match this

On top of that I have

# miss_access
# http://www.squid-cache.org/Doc/config/miss_access/
# Some MS urls are needed and can't be cached !
miss_access allow notwindowsupdate_url  ## doesn't match
# Deny Access to MS Update only from DMZ boxes
miss_access allow MsUpdateAllowed windowsupdate_url ## doesn't match
miss_access deny !DMZSRV windowsupdate_url ## should match


So that request should never have been allowed out... By out I mean
the request going to the internet from that client.

Have I missed something??
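One possible culprit, sketched below as a plain-regex experiment (this only demonstrates the regex behaviour; that Squid's ACL matcher behaves identically is an assumption): the trailing [^?] in the windowsupdate_url pattern has to consume a character after the extension, so a URL that ends exactly in ".psf" never matches the ACL at all, and none of the deny rules fire.

```python
import re

# The request from the log above (ends exactly in ".psf").
url = ("http://wsus.ds.download.windowsupdate.com/d/msdownload/update/"
       "software/secu/2015/11/windows6.1-kb3109103-x64_"
       "66e00af753e3faae5d558534711af7dc29a9160d.psf")

# The url_regex as posted.  Two quirks:
#  * [^?] must match a character AFTER the extension, so URLs ending in
#    the extension itself do not match;
#  * inside [...] the '|' is literal, so ms[i|u|f] also matches ".ms|";
#    ms[iuf] is presumably what was intended.
posted = re.compile(
    r"windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]",
    re.IGNORECASE)

# A possible fix (an assumption -- not verified against Squid itself):
# also accept end-of-URL right after the extension.
fixed = re.compile(
    r"windowsupdate\.com/.*\.(cab|exe|ms[iuf]|[ap]sf|wm[va]|dat|zip)([^?]|$)",
    re.IGNORECASE)

print(posted.search(url))             # None -> the ACL never matched
print(fixed.search(url) is not None)  # True
```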




 config
auth_param negotiate program /usr/bin/ntlm_auth
--helper-protocol=gss-spnego --configfile /etc/samba/smb.conf-squid
auth_param negotiate children 20 startup=0 idle=3
auth_param negotiate keep_alive on
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp --configfile
/etc/samba/smb.conf-squid
auth_param ntlm children 20 startup=0 idle=3
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic --configfile
/etc/samba/smb.conf-squid
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
acl sblMal dstdomain -i "/etc/squid/lists/squid-malicious.acl"
acl sblPorn dstdomain -i "/etc/squid/lists/squid-porn.acl"
acl localnet src 10.32.80.0/24
acl localnet_auth src 10.32.0.0/14
acl localnet_auth src 10.172.0.0/16
acl localnet_auth src 10.43.200.51/32
acl localnet_guest src 10.172.202.0/24
acl localnet_appproxy src 10.172.203.30/32
acl sblYBOveride dstdomain -i "/etc/squid/lists/yb-nonsquidblacklist.acl"
acl nonAuthDom dstdomain -i "/etc/squid/lists/nonAuthDom.lst"
acl nonAuthSrc src "/etc/squid/lists/nonAuthServer.lst"
acl FTP proto FTP
acl DMZSRV src 10.32.20.110
acl DMZSRV src 10.32.20.111
acl MsUpdateAllowed src 10.32.70.100
acl DirectExceptions url_regex -i
^http://(www.|)smh.com.au/business/markets-live/.*
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
acl SQUIDSPECIAL urlpath_regex ^/squid-internal-static/
acl AuthorizedUsers proxy_auth REQUIRED
acl icp_allowed src 10.32.20.110/32
acl icp_allowed src 10.32.20.111/32
acl icp_allowed src 10.172.203.30/32
acl icp_allowed src 10.172.203.34/32
acl windowsupdate_url url_regex -i
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]
acl windowsupdate_url url_regex -i
windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]
acl windowsupdate_url url_regex -i
windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]
acl notwindowsupdate_url dstdomain ctldl.windowsupdate.com crl.windowsupdate.com
http_access allow manager localhost
http_access allow manager icp_allowed
http_access deny manager
http_access allow icp_allowed
http_access allow SQUIDSPECIAL
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access allow localnet_appproxy
http_access deny !localnet_auth
http_access allow localnet_guest sblYBOveride
http_access deny localnet_guest sblMal
http_access deny localnet_guest sblPorn
http_access allow localnet_guest
http_access allow nonAuthSrc
http_access allow nonAuthDom
http_access allow sblYBOveride FTP
http_access allow sblYBOveride AuthorizedUsers
http_access deny sblMal
http_access deny sblPorn
http_access allow FTP
http_access allow AuthorizedUsers
http_access deny all
http_port 3128
http_port 8080
cache_mem 40960 MB
cache_mgr operations.mana...@abc.com
cachemgr_passwd report33 all
cache_dir aufs /var/spool/squid 55 16 256
always_direct allow FTP
always_direct allow DMZSRV
always_direct allow DirectExceptions
never_direct deny notwindowsupdate_url
never_direct deny MsUpdateAllowed windowsupdate_url
never_direct allow !DMZSRV windowsupdate_url
ftp_passive off
miss_access allow notwindowsupdate_url
miss_access allow MsUpdateAllowed windowsupdate_url
miss_access deny !DMZSRV windowsupdate_url
coredump_dir /var/spool/squid
range_offset_limit 1200 MB
maximum_object_size 1200 MB
quick_abort_min -1
refresh_pattern -i
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?] 4320
80% 129600 reload-into-ims
refresh_pattern -i

Re: [squid-users] MS update woes

2016-01-25 Thread Alex Samad
esc 8192
delay_pools 1
delay_class 1 1
delay_parameters 1 1310720/2621440
acl Delay_Domain dstdomain -i "/etc/squid/lists/delayDom.lst"
delay_access 1 deny DMZSRV
delay_access 1 allow Delay_Domain

"

On 25 January 2016 at 12:09, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 25/01/2016 11:20 a.m., Alex Samad wrote:
>> Hi
>>
>> Seems like I'm getting a bit confused in my conf now .. with
>> never_direct, always_direct. and miss_access
>>
>
> never_direct and always_direct determine whether cache_peer are required
> or allowed to be used on that connection respectively. You dont have
> cache_peer so only never_direct will have an effect via preventing any
> server connections from Squid.
>
> miss_access determines whether Squid is allowed to service a MISS
> transaction.
>
> In your setup never_direct and miss_access are roughly the same end
> result. But Squid does a lot more work in the never_direct case.
>
>
>>
>> # ##
>> # acl
>> # ##
>> acl sblMal dstdomain -i "/etc/squid/lists/squid-malicious.acl"
>> acl sblPorn dstdomain -i "/etc/squid/lists/squid-porn.acl"
>> acl localnet src 10.32.80.0/24
>> acl localnet_auth src 10.32.0.0/14
>> acl localnet_auth src 10.172.0.0/16
>> acl localnet_auth src 10.43.200.51/32
>> acl localnet_guest src 10.172.202.0/24
>> acl localnet_appproxy src 10.172.203.30/32
>> acl sblYBOveride dstdomain -i "/etc/squid/lists/yb-nonsquidblacklist.acl"
>> acl nonAuthDom dstdomain -i "/etc/squid/lists/nonAuthDom.lst"
>> acl nonAuthSrc src "/etc/squid/lists/nonAuthServer.lst"
>> acl FTP proto FTP
>> acl DMZSRV src 10.32.20.110
>> acl DMZSRV src 10.32.20.111
>> acl DirectExceptions url_regex -i
>> ^http://(www.|)smh.com.au/business/markets-live/.*
>> acl SSL_ports port 443
>> acl Safe_ports port 80  # http
>> acl Safe_ports port 21  # ftp
>> acl Safe_ports port 443 # https
>> acl CONNECT method CONNECT
>> acl SQUIDSPECIAL urlpath_regex ^/squid-internal-static/
>> acl AuthorizedUsers proxy_auth REQUIRED
>> acl icp_allowed src 10.32.20.110/32
>> acl icp_allowed src 10.32.20.111/32
>> acl icp_allowed src 10.172.203.30/32
>> acl icp_allowed src 10.172.203.34/32
>> acl windowsupdate_url url_regex -i
>> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]
>> acl windowsupdate_url url_regex -i
>> windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]
>> acl windowsupdate_url url_regex -i
>> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]
>> acl notwindowsupdate_url dstdomain ctldl.windowsupdate.com
>> acl nonCacheDom dstdomain -i "/etc/squid/lists/nonCacheDom.lst"
>> acl nonCacheURL urlpath_regex /x86_64/repodata/repomd.xml$
>> acl Delay_Domain dstdomain -i "/etc/squid/lists/delayDom.lst"
>>
>>
>>
>> ##http_access
>> ## presume this is processed first
>>
>> # manager access
>> http_access allow manager localhost
>> http_access allow manager icp_allowed
>> http_access deny manager
>>
>> # icp access
>> http_access allow icp_allowed
>>
>> # the squid special url
>> http_access allow SQUIDSPECIAL
>> # block non safe ports
>> http_access deny !Safe_ports
>> # block ssl non non ssl  ports
>> http_access deny CONNECT !SSL_ports
>>
>> #http_access deny to_localhost
>>
>> # Who can access
>> # network with no auth
>> http_access allow localnet
>> # local machine
>> http_access allow localhost
>> # other downstreams
>> http_access allow localnet_appproxy
>>
>> # this is my just in case MS update goes wild again turn this on ACL
>> #http_access deny !DMZSRV windowsupdate_url
>>
>
> That should be above the "allow localnet" line
> ... and maybe also above "allow icp_allowed" line.
>
>
>> # the catch all for ip address range
>> http_access deny !localnet_auth
>>
>> # special guest network rules (basically non auth)
>> http_access allow localnet_guest sblYBOveride
>> http_access deny localnet_guest sblMal
>> http_access deny localnet_guest sblPorn
>> http_access allow localnet_guest
>>
>> # non guest sources that can access via non auth
>> http_access allow nonAuthSrc
>> # non auth dest domains
>> http_access allow nonAuthDom
>>
>> # over ride some black list sites
>> http_access allow sblYBOveride FTP
>> http_access allow sblYBOveride AuthorizedUsers
>>
>> # squid blacklists
>

Re: [squid-users] MS update woes

2016-01-24 Thread Alex Samad
20% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

# NON Cache Domain
acl nonCacheDom dstdomain -i "/etc/squid/lists/nonCacheDom.lst"
cache deny nonCacheDom

# NON Cache URL
acl nonCacheURL urlpath_regex /x86_64/repodata/repomd.xml$
cache deny nonCacheURL



So what I have hoped to have done here is:
1) stop all except DMZSRV hosts from accessing the Microsoft Update URLs,
unless it's cached ...
2) allow DMZSRV hosts to request those files and place them in the cache.


I had thought I had done that before, but I noticed this morning a
spike as machines were turned on and they started to make requests.


These are the lines from before I added the miss_access config. I had thought
the never_direct would have stopped these!
I had to turn on the explicit
#http_access deny !DMZSRV windowsupdate_url

# ##
1453672641.992 28 10.172.202.102 TCP_MISS/206 1819330 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672652.908   9943 10.172.202.102 TCP_MISS/206 3639200 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672661.916   8973 10.172.202.102 TCP_MISS/206 1686624 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672662.026 20 10.172.202.102 TCP_MISS/206 1160541 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672664.922   1918 10.172.202.102 TCP_MISS/206 3119331 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672697.955  32927 10.172.202.102 TCP_MISS/206 1697038 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672698.245 16 10.172.202.102 TCP_MISS/206 1140456 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672699.359130 10.172.202.102 TCP_MISS/206 3424893 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
1453672700.269 38 10.172.202.102 TCP_MISS/206 2338346 GET
http://wsus.ds.download.windowsupdate.com/c/msdownload/update/software/secu/2015/12/ie11-windows6.1-kb3124275-x86_da23592568a57c26665a23d23d888428d831d739.psf
- HIER_NONE/- application/octet-stream
# ##



Any comments welcome.

Thanks


On 20 January 2016 at 14:27, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 20/01/2016 1:56 p.m., Alex Samad wrote:
>> Oh
>>
>> I am missing something. You're saying the actual GET includes more past
>> the ? and that Squid's logging isn't recording it!
>
> Yes. There is part of the URL that is not logged by default. Sometimes
> that part is very big by many KB, and/or wrongly containing sensitive info.
> Set <http://www.squid-cache.org/Doc/config/strip_query_terms/> to
> show/hide that part.
>
>>
>> So what I really need to do is modify the original to exclude any urls
>> that have ?
>>
>> something like ?
>> "windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]"
>>
>
> What I suspect is that some part of the hidden query-string is different
> between the MISS and possibly between your prefetch request.
>
> You may be able to use the Store-ID feature to compact duplicates if the
> changing part is unimportant. But that would have to be done very
> carefully as there are some nasty side effects worse than bandwidth
> usage if it goes wrong.
>  So leave off trying for a fix until you/we are clear on what exactly
> the reason for the MISS is.
>
> Amos
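The Store-ID feature Amos mentions is driven by an external helper configured with store_id_program. Below is a minimal sketch of one, as I understand the helper protocol; the rewrite rule (dropping the query string on windowsupdate download URLs) is purely illustrative and is exactly the kind of thing Amos warns can go wrong if the changing part matters:

```python
import sys

def store_id(url):
    """Return a canonical cache key for the URL, or None to leave it unchanged.
    Illustrative rule only: collapse windowsupdate URLs that differ solely
    in their query string onto one cache object."""
    if "windowsupdate.com/" in url and "?" in url:
        return url.split("?", 1)[0]
    return None

def handle_line(line):
    """Squid sends '[channel-ID] URL [extras...]' per request; the helper
    replies 'OK store-id=<key>' to rewrite or 'ERR' to leave the URL alone."""
    parts = line.split()
    channel = ""
    url = parts[0]
    if len(parts) > 1 and parts[0].isdigit():   # concurrency channel ID
        channel, url = parts[0] + " ", parts[1]
    sid = store_id(url)
    return channel + ("OK store-id=" + sid if sid else "ERR")

if __name__ == "__main__":
    # Real helper loop: one reply line per request line, flushed immediately.
    for request in sys.stdin:
        sys.stdout.write(handle_line(request) + "\n")
        sys.stdout.flush()
```

Wired up in squid.conf with something like `store_id_program /usr/local/bin/storeid.py` plus a `store_id_access` rule restricting it to the update domains (paths here are assumptions).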
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS update woes

2016-01-19 Thread Alex Samad
Oh,

I am missing something. You're saying the actual GET includes more past
the '?' and that Squid's logging isn't recording it!

So what I really need to do is modify the original to exclude any URLs
that have a '?'

something like ?
"windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)[^?]"




On 19 January 2016 at 17:15, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 19/01/2016 7:11 p.m., Alex Samad wrote:
>> Hi
>>
>> Think I answered my own on this
>> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
>>
>>
>> Does the last refresh_pattern config win ?
>>
>
> No, this one does:
>   "windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)"
>
>
> The problem is probably in the query-string portion of the URL which is
> omitted from your log entries. If there is even a single character
> difference they are not the same cache object.
>
> Amos
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] delay pools

2016-01-18 Thread Alex Samad
Hi

Is it possible to implement delay pools such that

if file is less than 10M
then
  allow 60Mb/s
else
  allow 20Mb/s
fi


Is that possible? The aim is to allow higher throughput for smaller
files, but to limit bigger/longer connections.

Alex
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS update woes

2016-01-18 Thread Alex Samad
On 19 January 2016 at 16:59, Amos Jeffries  wrote:
>
> Hmm. Are you using the exact same HTTP headers as WU tools on the other
> machines do to prefetch the URL into the cache ?

I have a script that checks the squid logs and then does a download of
the files through the cache -- for now

>
>>
>> So I was thinking: is there a way in the ACLs to allow some machines to
>> access the URLs, but only if they are cached,
>> and others to pull them down from the internet?
>
>
> miss_access directive does that.


I actually used never_direct and used the same url selection.

I'll have a look at miss_access.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS update woes

2016-01-18 Thread Alex Samad
On 19 January 2016 at 16:59, Amos Jeffries  wrote:
>> refresh_pattern -i
>> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
>> 129600 reload-into-ims
>> refresh_pattern -i
>> windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320
>> 80% 129600 reload-into-ims
>> refresh_pattern -i
>> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
>> 129600 reload-into-ims
>>
>> # Add any of your own refresh_pattern entries above these.
>> refresh_pattern ^ftp:   144020% 10080
>> refresh_pattern ^gopher:14400%  1440
>> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
>> refresh_pattern .   0   20% 4320
>>
>>

Any idea why

7 k.abc.com TCP_MISS/200 6913 GET
http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?
- HIER_DIRECT/14.200.100.27 application/octet-stream

3 g.abc.com TCP_MISS/200 7780 GET
http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/pinrulesstl.cab?
- HIER_DIRECT/14.200.100.27 application/octet-stream


5 a.abc.com TCP_MISS/200 6913 GET
http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?
- HIER_DIRECT/14.200.100.26 application/octet-stream


these are not being cached??? I thought the above config forced them to
be cached?

A
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS update woes

2016-01-18 Thread Alex Samad
Hi

Think I answered my own question on this:
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0


Does the last refresh_pattern config win ?

On 19 January 2016 at 17:08, Alex Samad <a...@samad.com.au> wrote:
> On 19 January 2016 at 16:59, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>> refresh_pattern -i
>>> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
>>> 129600 reload-into-ims
>>> refresh_pattern -i
>>> windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320
>>> 80% 129600 reload-into-ims
>>> refresh_pattern -i
>>> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
>>> 129600 reload-into-ims
>>>
>>> # Add any of your own refresh_pattern entries above these.
>>> refresh_pattern ^ftp:   144020% 10080
>>> refresh_pattern ^gopher:14400%  1440
>>> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
>>> refresh_pattern .   0   20% 4320
>>>
>>>
>
> Any idea why
>
> 7 k.abc.com TCP_MISS/200 6913 GET
> http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?
> - HIER_DIRECT/14.200.100.27 application/octet-stream
>
> 3 g.abc.com TCP_MISS/200 7780 GET
> http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/pinrulesstl.cab?
> - HIER_DIRECT/14.200.100.27 application/octet-stream
>
>
> 5 a.abc.com TCP_MISS/200 6913 GET
> http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?
> - HIER_DIRECT/14.200.100.26 application/octet-stream
>
>
> these are not being cached??? I thought the above config forced them to
> be cached?
>
> A
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] MS update woes

2016-01-17 Thread Alex Samad
Hi

So I have this in place now.

This works well for delaying, YAY!

#
# Delay Pools
# http://wiki.squid-cache.org/Features/DelayPools
# 
http://www.serverwatch.com/tutorials/article.php/3357241/Reining-in-Bandwidth-With-Squid-Proxying.htm
delay_pools 1
delay_class 1 1

# 10Mb/s fille rate , 20Mb/s reserve
# 10485760/8 = 1310720
# 20971520/8 = 2621440
delay_parameters 1 1310720/2621440

# What to delay
acl Delay_ALL src all
acl Delay_Domain dstdomain -i "/etc/squid/lists/delayDom.lst"

delay_access 1 deny DMZSRV
delay_access 1 allow Delay_Domain




But this doesn't seem to be working



# 
#  MS Windows UpDate ACL's
# 
acl windowsupdate_url url_regex -i
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)
acl windowsupdate_url url_regex -i
windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)
acl windowsupdate_url url_regex -i
windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)


# http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
# 800M for MS SQL patch file
range_offset_limit 800 MB
maximum_object_size 800 MB

range_offset_limit 800 MB windowsupdate_url
maximum_object_size 800 MB windowsupdate_url

# http://www.squid-cache.org/Versions/v3/3.5/cfgman/quick_abort_min.html
# If you want retrievals to always continue if they are being
#   cached set 'quick_abort_min' to '-1 KB'.
quick_abort_min -1

## range_offset_list is set to just MS URL
## set quick abort back to normal
#quick_abort_min 16 KB
#quick_abort_max 1024 KB
#quick_abort_pct 95


# Now all that this line tells us to do is cache all .cab, .exe, .msi,
# .msu, .msf, .asf, .psf, .wmv, .wma, .dat and .zip files from microsoft.com,
# and the lifetime of the object in the cache is 4320 minutes (aka 3
# days) to 43200 minutes (aka 30 days).
# Each of the downloaded objects is added to the cache, and then
# whenever a request arrives indicating the cache copy must not be used,
# it gets converted to an if-modified-since check instead of a new
# copy reload request.

# Change to 90 days
#refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
#refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
#refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
129600 reload-into-ims
refresh_pattern -i
windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320
80% 129600 reload-into-ims
refresh_pattern -i
windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
129600 reload-into-ims

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320



I have turned this on to stop all but my test machine from downloading
from there.
# 
# Blockers
# Off by default
# 
# if there is a problem with MS update uncomment this
http_access deny !DMZSRV windowsupdate_url


Seems like it's not caching again.


So I was thinking: is there a way in the ACLs to allow some machines to
access the URLs, but only if they are cached,
and others to pull them down from the internet?

Alex
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS Update

2016-01-11 Thread Alex Samad
Hi

On 11 January 2016 at 18:54, Amos Jeffries  wrote:
>> guessing I have to bump up the 200M max to 800mb.
>
> Maybe. But IMHO use the ACLs that range_offset_limit can take.

You're suggesting I limit the offset limit to just the Windows update sites?

>
>> are the other values still okay ?
>
> Yes.

So if I bump it up to 800MB it will start to work okay again?

So, using http://wiki.squid-cache.org/SquidFaq/WindowsUpdate, which I
used to get the rules,
the special way to make this work is:

turn off all the client PCs, then do a single download of the file -
this will place all of it in the cache;

then I can turn the other clients back on ...
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] MS Update

2016-01-10 Thread Alex Samad
Hi

I burnt up 172G of download in 24 hours with multiple machines doing the
download of the same file (an MS SQL patch).

I think I am running into the same issue.


So multiple machines are trying to do the download...
Q) why don't they share the same download?!

1452459804.945  64052 10.172.208.108 TCP_MISS/206 1727799 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/svpk/2015/05/sqlserver2014sp1-kb3058865-x64-enu_2c84e2ebd0d3cb4980a3a1a80d79fd7520405626.exe
- HIER_DIRECT/150.101.195.217 application/octet-stream
1452459868.272  63326 10.172.208.108 TCP_MISS/206 1312208 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/svpk/2015/05/sqlserver2014sp1-kb3058865-x64-enu_2c84e2ebd0d3cb4980a3a1a80d79fd7520405626.exe
- HIER_DIRECT/150.101.195.217 application/octet-stream
1452459933.336  65061 10.172.208.108 TCP_MISS/206 1155440 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/svpk/2015/05/sqlserver2014sp1-kb3058865-x64-enu_2c84e2ebd0d3cb4980a3a1a80d79fd7520405626.exe
- HIER_DIRECT/150.101.195.217 application/octet-stream
1452459998.406  65067 10.172.208.108 TCP_MISS/206 1022158 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/svpk/2015/05/sqlserver2014sp1-kb3058865-x64-enu_2c84e2ebd0d3cb4980a3a1a80d79fd7520405626.exe
- HIER_DIRECT/150.101.195.217 application/octet-stream
1452460066.455  68046 10.172.208.108 TCP_MISS/206 2006058 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/svpk/2015/05/sqlserver2014sp1-kb3058865-x64-enu_2c84e2ebd0d3cb4980a3a1a80d79fd7520405626.exe
- HIER_DIRECT/150.101.195.200 application/octet-stream
1452460134.536  68078 10.172.208.108 TCP_MISS/206 1575462 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/svpk/2015/05/sqlserver2014sp1-kb3058865-x64-enu_2c84e2ebd0d3cb4980a3a1a80d79fd7520405626.exe
- HIER_DIRECT/150.101.195.200 application/octet-stream
1452460204.180  69643 10.172.208.108 TCP_MISS/206 1387948 GET
http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/svpk/2015/05/sqlserver2014sp1-kb3058865-x64-enu_2c84e2ebd0d3cb4980a3a1a80d79fd7520405626.exe
- HIER_DIRECT/150.101.195.217 application/octet-stream


Here you can see multiple requests for the same file.

I am presuming 206 is a partial (range) download - is that Windows or Squid?
I presume the Windows client.

So is it the byte range that gets cached?

If client A wants bytes 100-200 of file X,
and client B wants 50-150, will squid reuse whatever has been
downloaded of the 100-200 request when serving client B?


Is there any way I can force the requests into a single fetch of the file?
I could manually download the file once; that would place it in the cache.


I have this in my config
# http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
range_offset_limit 200 MB
maximum_object_size 200 MB
quick_abort_min -1

refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims


Guessing I have to bump up the 200 MB max to 800 MB. Are the other values
still okay?


A


[squid-users] Question about delay pools again

2016-01-04 Thread Alex Samad
Hi

Just wanted to confirm my understanding of delay pools and the ability
to rate-limit inbound traffic.

Today one of our W10 machines did its Windows update... a new patch:
MS SQL SP3 - a 384 MB patch.

So it contacted our squid proxy, which then downloaded it from the WSUS
update site - which is geo-cached with our local ISP.

This then flooded our 100 Mb WAN port.

My understanding is that delay pools will not help me with rate-limiting
that down to a cap of, say, 10 Mb/s.

The only thing that Squid or Linux can do is delay ACKs and thus rate-limit
that way.

Delay pools are more for Squid -> end user...


Thanks
Alex


[squid-users] monitoring

2016-01-04 Thread Alex Samad
Hi

Is there a way to see what is being downloaded, and by whom, before it has finished?

I had somebody doing a big download and I wanted to find it. The only way
I could do that was by stopping squid and checking the log file.

Is there another way of doing that?
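For what it's worth, one possible approach - assuming the cache manager interface is enabled and reachable, and the squidclient tool is installed (both assumptions on my part, not something stated in this thread):

```shell
# List transfers currently in progress via the cache manager
# (requires manager access to be allowed in http_access).
squidclient mgr:active_requests

# Or watch requests as they complete in the access log
# (log path varies by distribution).
tail -f /var/log/squid/access.log
```

The active_requests report shows each in-flight request with its client address and URL, which avoids having to stop squid just to inspect the log.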


Re: [squid-users] Question about delay pools again

2016-01-04 Thread Alex Samad
On 5 January 2016 at 12:40, Amos Jeffries  wrote:
> What the above does is not limit any particular user. But limits the
> total server bandwidth to those domains (combined) to 10Mbps. It is a
> good solution, but still has a few problems.
>
> WU will now be very slow, proportional to how many users are downloading
> the updates as MISS rather than HIT. Remembering that until each update
> object is fully fetched once it will not HIT.

Cool

So is there a better way of configuring it ?


Re: [squid-users] Question about delay pools again

2016-01-04 Thread Alex Samad
So I thought I would try it out.

#
# Delay Pools
# http://wiki.squid-cache.org/Features/DelayPools
# 
http://www.serverwatch.com/tutorials/article.php/3357241/Reining-in-Bandwidth-With-Squid-Proxying.htm
delay_pools 1
delay_class 1 1

# 10Mb/s fill rate, 20Mb/s reserve (bucket size)
# 10485760/8 = 1310720
# 20971520/8 = 2621440
delay_parameters 1 1310720/2621440

# What to delay
acl Delay_ALL src all
acl Delay_Domain dstdomain -i "/etc/squid/lists/delayDom.lst"

delay_access 1 allow Delay_Domain


/etc/squid/lists/delayDom.lst
.windowsupdate.com


and I can just add domains to the file as needed
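The Mb/s-to-bytes arithmetic in the config comments above can be checked with plain shell arithmetic (delay_parameters values are in bytes per second):

```shell
# delay_parameters takes bytes/second; the config comments convert
# N Mb/s as N * 1024 * 1024 bits / 8 bits-per-byte.
echo $((10 * 1024 * 1024 / 8))   # 1310720 - the 10 Mb/s fill rate
echo $((20 * 1024 * 1024 / 8))   # 2621440 - the 20 Mb/s bucket size
```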


On 5 January 2016 at 10:57, Alex Samad <a...@samad.com.au> wrote:
> Hi
>
> Just wanted to confirm my understanding of delay pools and the ability
> to ratelimit inbound traffic.
>
> Today one of our W10 machines did it windows update .. New patch ..
> .MS SQL SP3 - 384M big patch
>
> So it contacts our squid proxy with then downloaded it from WSUS
> update ... which is geocached with out local ISP.
>
> This then flooded our 100Mb wan port.
>
> My understanding is that delay pools will not help me with rate
> limiting that to a cap of say 10Mb/s
>
> The only thing that Squid or Linux can do is delay ACK's and thus rate
> limit that way.
>
> Delay pools are more for SQUID -> End user ...
>
>
> Thanks
> Alex


[squid-users] More cache peer confusion

2016-01-04 Thread Alex Samad
From the logs:

# these 2 are from my laptop to alcdmz which then talks to gsdmz1,
which responds with a 504

Jan 05 11:55:53 2016.808  0 alcdmz1.abc.com TCP_HIT/504 4800 GET
http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css -
HIER_NONE/- text/html
Jan 05 11:55:55 2016.332  0 alcdmz1.abc.com
TCP_CLIENT_REFRESH_MISS/504 4642 GET
http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css -
HIER_NONE/- text/html


# this is from the gsdmz1 box with export http_proxy=http://gsdmz1:3128 - seems to work
Jan 05 11:56:34 2016.282  4 gsdmz1.abc.com TCP_MEM_HIT/200 1556
GET http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css -
HIER_NONE/- text/css

# straight afterwards again from laptop via alcdmz1
Jan 05 11:56:43 2016.596  1 alcdmz1.abc.com
TCP_CLIENT_REFRESH_MISS/504 4642 GET
http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css -
HIER_NONE/- text/html



from alcdmz1
wget -d -O /dev/null
http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css
Setting --output-document (outputdocument) to /dev/null
DEBUG output created by Wget 1.12 on linux-gnu.

--2016-01-05 11:59:53--
http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css
Resolving alcdmz1... 10.32.20.111
Caching alcdmz1 => 10.32.20.111
Connecting to alcdmz1|10.32.20.111|:3128... connected.
Created socket 4.
Releasing 0x00c37880 (new refcount 1).

---request begin---
GET http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: wiki.squid-cache.org

---request end---
Proxy request sent, awaiting response...
---response begin---
HTTP/1.1 200 OK
Date: Tue, 05 Jan 2016 00:55:13 GMT
Server: Apache/2.4.10 (Debian)
Last-Modified: Mon, 04 Feb 2008 14:13:52 GMT
ETag: "453-44555bbcaa800"
Accept-Ranges: bytes
Content-Length: 1107
Vary: Accept-Encoding
Cache-Control: max-age=604800, public
Expires: Tue, 12 Jan 2016 00:55:13 GMT
Content-Type: text/css
Age: 280
X-Cache: HIT from alcdmz1
X-Cache-Lookup: HIT from alcdmz1:3128
Via: 1.1 alcdmz1 (squid)
Connection: close

---response end---
200 OK
Length: 1107 (1.1K) [text/css]
Saving to: `/dev/null'

100%[==>]
1,107   --.-K/s   in 0s

Closed fd 4
2016-01-05 11:59:53 (193 MB/s) - `/dev/null' saved [1107/1107]

Looks okay - but look at the logs:

Jan 05 11:59:53 2016.380  0 alcdmz1.abc.com TCP_MEM_HIT/200 1559
GET http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css -
HIER_NONE/- text/css
Jan 05 12:00:59 2016.434  5 alexs-xps.abc.com TCP_MISS/504 4889
GET http://wiki.squid-cache.org/wiki/squidtheme/js/niftyCorners.css
alex.samad STANDBY_POOL/10.32.20.110 text/html

I tried a refresh from my browser ..


Re: [squid-users] Error accessing the 403 page

2016-01-01 Thread Alex Samad
On 2 January 2016 at 12:23, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 2016-01-02 13:19, Alex Samad wrote:
>>
>> On 2 January 2016 at 09:22, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>>
>>> On 2016-01-01 23:28, Alex Samad wrote:
>>>>
>>>>
>>>> Hi
>>>>
>>>> I installed 3.5.12 and when I try and get to a page that is blocked. I
>>>> used to get an message page that said contact the admin person.
>>>>
>>>> trying to get to
>>>> http://bcp.crwdcntrl.net/squid-internal-static/icons/SN.png
>>>>
>>>>
>>>> This is part of the error generated
>>>> The following error was encountered while trying to retrieve the URL:
>>>> http://alcdmz1:3128/squid-internal-static/icons/SN.png
>>>>
>>>> alcdmz1 is the proxy server
>>>>
>>>> I seemed to have blocked access to all error messages. not sure how as
>>>> I haven't made any changes except upgrading to .12 from .11
>>>
>>>
>>>
>>> We fixed the Host header output on CONNECT requests to cache_peer between
>>> those versions. That is likely the reason it has started being visible.
>>
>>
>> Sorry not sure how that is related to this.
>
>
> It is the only Squid change between those versions that seems related to the
> issue.
>
>

okay

>>
>>>
>>> The above URL is just an icon being served up by your Squid as part of
>>> the
>>> page display. The main error page text should have been sent as the body
>>> of
>>> the original 403 message itself.
>>>
>>
>> agree
>>
>>> Your http_access rules are the things rejecting it. Note that it contains
>>> the squid listening domain:port (alcdmz1:3128 or bcp.crwdcntrl.net:80)
>>> which
>>> your proxy machine is configured to announce publicly as its contain
>>> domain
>>> / FQDN.
>>>
>>
>> The original url was bcp.crwdcntrl.net:80, the page I got back
>> included the text
>> http://alcdmz1:3128/squid-internal-static/icons/SN.png
>>
>>
>>> The squid service needs to be publicly accessible at that domain:port
>>> that
>>> it is advertising as its public FQDN for this icon request to succeed.
>>> That
>>> means making the server hostname, or visible_hostname something that
>>> clients
>>> can access directly - and unique_hostname the private internal name the
>>> Squid instance uses to distinguish itself from other peers on the proxy
>>> farm.
>>
>>
>> so they can connect to alcdmz1:3128
>>
>>
>>
>> conf
>> auth_param negotiate program /usr/bin/ntlm_auth
>> --helper-protocol=gss-spnego --configfile /etc/samba/smb.conf-squid
>> auth_param negotiate children 20 startup=0 idle=3
>> auth_param negotiate keep_alive on
>> auth_param ntlm program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-ntlmssp --configfile
>> /etc/samba/smb.conf-squid
>> auth_param ntlm children 20 startup=0 idle=3
>> auth_param ntlm keep_alive on
>> auth_param basic program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic --configfile
>> /etc/samba/smb.conf-squid
>> auth_param basic children 5
>> auth_param basic realm Squid proxy-caching web server
>> auth_param basic credentialsttl 2 hours
>> acl sblMal dstdomain -i "/etc/squid/lists/squid-malicious.acl"
>> acl sblPorn dstdomain -i "/etc/squid/lists/squid-porn.acl"
>> acl localnet src 10.3.8.0/24
>> acl localnet_auth src 10.1.0.0/14
>> acl localnet_auth src 10.2.0.0/16
>> acl localnet_auth src 10.2.2.1/32
>
>
> NP: 10.1.0.0/14 contains and matches all of 10.2.*.*, therefore the other
> localnet_auth entries are all redundant and can be removed.
>
> (squid -k parse should be warning you about that)
>
>
>> acl localnet_guest src 10.1.22.0/24
>> acl localnet_appproxy src 10.172.23.3/32
>
>
> NP: localnet and localnet_appproxy are both of the same type and both only
> used to allow http_access within the same block of allows.
>
> You should simplify by adding 10.172.23.3 to the localnet definition and
> drop localnet_appproxy entirely.

I have changed some of the IP addresses for the email.
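As a sanity check on the quoted subnet advice: a /14 mask (255.252.0.0) applied to 10.1.0.0 and to 10.2.2.1 yields the same network, which is why the extra localnet_auth entries are redundant. A minimal shell sketch of that check:

```shell
# Build the /14 netmask (top 14 bits set) and compare the two addresses
# under it; both reduce to 10.0.0.0, so 10.1.0.0/14 already contains
# 10.2.0.0/16 and 10.2.2.1/32.
mask14=$(( (0xFFFFFFFF << 18) & 0xFFFFFFFF ))       # 255.252.0.0
ip_a=$(( (10 << 24) | (1 << 16) ))                  # 10.1.0.0
ip_b=$(( (10 << 24) | (2 << 16) | (2 << 8) | 1 ))   # 10.2.2.1
echo $(( (ip_a & mask14) == (ip_b & mask14) ))      # 1 = same /14 network
```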

>
>> acl sblYBOveride dstdomain -i "/etc/squid/lists/yb-nonsquidblacklist.acl"
>> acl nonAuthDom dstdomain -i "/etc/squid/lists/nonAuthDom.lst"
>> acl nonAuthSrc src "/etc/squid/lists/nonAuthServer.lst"
>> acl FTP p

Re: [squid-users] Error accessing the 403 page

2016-01-01 Thread Alex Samad
On 2 January 2016 at 09:22, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 2016-01-01 23:28, Alex Samad wrote:
>>
>> Hi
>>
>> I installed 3.5.12 and when I try and get to a page that is blocked. I
>> used to get an message page that said contact the admin person.
>>
>> trying to get to
>> http://bcp.crwdcntrl.net/squid-internal-static/icons/SN.png
>>
>>
>> This is part of the error generated
>> The following error was encountered while trying to retrieve the URL:
>> http://alcdmz1:3128/squid-internal-static/icons/SN.png
>>
>> alcdmz1 is the proxy server
>>
>> I seemed to have blocked access to all error messages. not sure how as
>> I haven't made any changes except upgrading to .12 from .11
>
>
> We fixed the Host header output on CONNECT requests to cache_peer between
> those versions. That is likely the reason it has started being visible.

Sorry not sure how that is related to this.

>
> The above URL is just an icon being served up by your Squid as part of the
> page display. The main error page text should have been sent as the body of
> the original 403 message itself.
>

agree

> Your http_access rules are the things rejecting it. Note that it contains
> the squid listening domain:port (alcdmz1:3128 or bcp.crwdcntrl.net:80) which
> your proxy machine is configured to announce publicly as its contain domain
> / FQDN.
>

The original url was bcp.crwdcntrl.net:80, the page I got back
included the text
http://alcdmz1:3128/squid-internal-static/icons/SN.png


> The squid service needs to be publicly accessible at that domain:port that
> it is advertising as its public FQDN for this icon request to succeed. That
> means making the server hostname, or visible_hostname something that clients
> can access directly - and unique_hostname the private internal name the
> Squid instance uses to distinguish itself from other peers on the proxy
> farm.

so they can connect to alcdmz1:3128



conf
auth_param negotiate program /usr/bin/ntlm_auth
--helper-protocol=gss-spnego --configfile /etc/samba/smb.conf-squid
auth_param negotiate children 20 startup=0 idle=3
auth_param negotiate keep_alive on
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp --configfile
/etc/samba/smb.conf-squid
auth_param ntlm children 20 startup=0 idle=3
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic --configfile
/etc/samba/smb.conf-squid
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
acl sblMal dstdomain -i "/etc/squid/lists/squid-malicious.acl"
acl sblPorn dstdomain -i "/etc/squid/lists/squid-porn.acl"
acl localnet src 10.3.8.0/24
acl localnet_auth src 10.1.0.0/14
acl localnet_auth src 10.2.0.0/16
acl localnet_auth src 10.2.2.1/32
acl localnet_guest src 10.1.22.0/24
acl localnet_appproxy src 10.172.23.3/32
acl sblYBOveride dstdomain -i "/etc/squid/lists/yb-nonsquidblacklist.acl"
acl nonAuthDom dstdomain -i "/etc/squid/lists/nonAuthDom.lst"
acl nonAuthSrc src "/etc/squid/lists/nonAuthServer.lst"
acl FTP proto FTP
acl DMZSRV src 10.3.2.110
acl DMZSRV src 10.3.2.111
always_direct allow FTP
always_direct allow DMZSRV
ftp_passive off
ftp_epsv_all off
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
acl AuthorizedUsers proxy_auth REQUIRED
acl icp_allowed src 10.3.2.110/32
acl icp_allowed src 10.3.2.111/32
acl icp_allowed src 10.172.23.0/32
acl icp_allowed src 10.172.23.4/32
http_access allow manager localhost
http_access allow manager icp_allowed
http_access deny manager
http_access allow icp_allowed
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access allow localnet_appproxy
http_access deny !localnet_auth
http_access allow localnet_guest sblYBOveride
http_access deny localnet_guest sblMal
http_access deny localnet_guest sblPorn
http_access allow localnet_guest
http_access allow nonAuthSrc
http_access allow nonAuthDom
http_access allow sblYBOveride FTP
http_access allow sblYBOveride AuthorizedUsers
http_access deny sblMal
http_access deny sblPorn
http_access allow FTP
http_access allow AuthorizedUsers
http_access deny all
http_port 3128
http_port 8080
cache_mem 40960 MB
cache_mgr operations.mana...@abc.com
cache_dir aufs /var/spool/squid 55 16 256
coredump_dir /var/spool/squid
range_offset_limit 200 MB
maximum_object_size 200 MB
quick_abort_min -1
refresh_pattern -i
microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
43200 reload-into-ims
refresh_pattern -i
windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320
80% 43200 relo

[squid-users] Error accessing the 403 page

2016-01-01 Thread Alex Samad
Hi

I installed 3.5.12. When I try to get to a page that is blocked, I
used to get a message page that said to contact the admin person.

trying to get to
http://bcp.crwdcntrl.net/squid-internal-static/icons/SN.png


This is part of the error generated
The following error was encountered while trying to retrieve the URL:
http://alcdmz1:3128/squid-internal-static/icons/SN.png

alcdmz1 is the proxy server

I seem to have blocked access to all error messages. Not sure how, as
I haven't made any changes except upgrading to .12 from .11.
A


Re: [squid-users] squid reverse proxy and client certs

2015-12-30 Thread Alex Samad
Hi

Thanks - I thought that might be the issue.

Could you point me to an example of requesting client certs for a directory?

Thanks
Alex

On 30 December 2015 at 21:56, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
> On 30.12.15 15:11, Alex Samad wrote:
>>
>> I have squid 3.5.12 working as a reverse proxy
>>
>> cache_peer 127.0.0.1 \
>> parent 443 0 proxy-only no-query no-digest originserver \
>> login=PASS \
>> ssl \
>> sslcafile=/etc/pki/tls/certs/ca-bundle.crt \
>> sslflags=DONT_VERIFY_PEER \
>> name=webServer
>>
>> This points to httpd which has a
>>
>>DirectoryIndex index.shtml index.html
>>Options -Indexes -Includes +IncludesNOEXEC
>> -SymLinksIfOwnerMatch -ExecCGI -FollowSymLinks
>>
>>SSLOptions +StdEnvVars +ExportCertData
>>SSLVerifyClient optional_no_ca
>>SSLVerifyDepth 4
>>
>>
>> Unfortunately the request for a client cert never makes it to the client.
>>
>> How can I change this to allow client certs to work
>
>
> client certs will only work when you pass the connection directly to web
> server without unbundling SSL.
> That means, it's useless to use reverse proxy for HTTPS server when it needs
> client certificates.
>
> The workaround you could be in verifying client certificates by squid,
> pushing that info to server and webserver trusting that info...
>
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> Chernobyl was an Windows 95 beta test site.


[squid-users] squid reverse proxy and client certs

2015-12-29 Thread Alex Samad
Hi

I have squid 3.5.12 working as a reverse proxy

cache_peer 127.0.0.1 \
 parent 443 0 proxy-only no-query no-digest originserver \
 login=PASS \
 ssl \
 sslcafile=/etc/pki/tls/certs/ca-bundle.crt \
 sslflags=DONT_VERIFY_PEER \
 name=webServer

This points to httpd which has a

DirectoryIndex index.shtml index.html
Options -Indexes -Includes +IncludesNOEXEC
-SymLinksIfOwnerMatch -ExecCGI -FollowSymLinks

SSLOptions +StdEnvVars +ExportCertData
SSLVerifyClient optional_no_ca
SSLVerifyDepth 4


Unfortunately the request for a client cert never makes it to the client.

How can I change this to allow client certs to work?

Alex


Re: [squid-users] [squid-announce] Squid 3.5.12 is available

2015-12-28 Thread Alex Samad
Hi

Do you provide the source RPMs for RHEL/CentOS?

A

On 28 December 2015 at 23:35, Eliezer Croitoru  wrote:
> I took the time to build and test a RPM for OpenSUSE leap 42.1 at:
> http://ngtech.co.il/repo/opensuse/leap/x86_64/squid-3.5.12-1.0.x86_64.rpm
>
> SRPM at:
> http://ngtech.co.il/repo/opensuse/leap/SRPMS/
>
> Eliezer
>
> On 29/11/2015 08:01, Amos Jeffries wrote:
>>
>> The Squid HTTP Proxy team is very pleased to announce the availability
>> of the Squid-3.5.12 release!
>>
>>
>> This release is a bug fix release resolving issues found in the prior
>> Squid releases.
>>
>>
>> The major changes to be aware of:
>
> 


Re: [squid-users] squid cache peer issues

2015-12-21 Thread Alex Samad
Hi

Seems like .12 is now available for me. I will apply it and retest. Is
there anything you would like me to do if I see it again?

A

On 21 December 2015 at 21:26, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 21/12/2015 2:00 p.m., Alex Samad wrote:
>> Hi
>>
>> running on centos 6.7
>>
>> 3.5.12 still not available on centos 6.
>>
>> rpm -qa | grep squid
>> squid-helpers-3.5.11-1.el6.x86_64
>> squid-3.5.11-1.el6.x86_64
>>
>> This is the 2 cache_peer statements I use
>>
>> # on alcdmz1
>> cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>> #cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>>
>> # on gsdmz1
>> #cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>> cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>>
>> on alcdmz1 with export http_proxy pointing to alcdmz1
>>
>> wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> -O /dev/null
>> Setting --output-document (outputdocument) to /dev/null
>> DEBUG output created by Wget 1.12 on linux-gnu.
>>
>> --2015-12-21 11:58:05--
>> http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> Resolving alcdmz1... 10.32.20.111
>> Caching alcdmz1 => 10.32.20.111
>> Connecting to alcdmz1|10.32.20.111|:3128... connected.
>> Created socket 4.
>> Releasing 0x0101d540 (new refcount 1).
>>
>> ---request begin---
>> GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
>> User-Agent: Wget/1.12 (linux-gnu)
>> Accept: */*
>> Host: fonts.gstatic.com
>>
>> ---request end---
>> Proxy request sent, awaiting response...
>> ---response begin---
>> HTTP/1.1 200 OK
>> Content-Type: font/woff2
>> Access-Control-Allow-Origin: *
>> Timing-Allow-Origin: *
>> Date: Mon, 30 Nov 2015 04:06:16 GMT
>> Expires: Tue, 29 Nov 2016 04:06:16 GMT
>> Last-Modified: Mon, 06 Oct 2014 20:40:59 GMT
>> X-Content-Type-Options: nosniff
>> Server: sffe
>> Content-Length: 25604
>> X-XSS-Protection: 1; mode=block
>> Cache-Control: public, max-age=31536000
>> Age: 1803109
>> Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
>> than 1 day old
>> X-Cache: HIT from alcdmz1
>> X-Cache-Lookup: HIT from alcdmz1:3128
>> Via: 1.1 alcdmz1 (squid)
>> Connection: close
>>
>> ---response end---
>> 200 OK
>> Length: 25604 (25K) [font/woff2]
>> Saving to: `/dev/null'
>>
>> 100%[==>]
>> 25,604  --.-K/s   in 0s
>>
>> Closed fd 4
>> 2015-12-21 11:58:05 (1.01 GB/s) - `/dev/null' saved [25604/25604]
>>
>>
>> on gsdmz1
>>
>>
>> wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> -O /dev/null
>> Setting --output-document (outputdocument) to /dev/null
>> DEBUG output created by Wget 1.12 on linux-gnu.
>>
>> --2015-12-21 11:58:59--
>> http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> Resolving gsdmz1... 10.32.20.110
>> Caching gsdmz1 => 10.32.20.110
>> Connecting to gsdmz1|10.32.20.110|:3128... connected.
>> Created socket 4.
>> Releasing 0x010a2930 (new refcount 1).
>>
>> ---request begin---
>> GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
>> User-Agent: Wget/1.12 (linux-gnu)
>> Accept: */*
>> Host: fonts.gstatic.com
>>
>> ---request end---
>> Proxy request sent, awaiting response...
>> ---response begin---
>> HTTP/1.1 504 Gateway Timeout
>> Server: squid
>> Mime-Version: 1.0
>> Date: Mon, 21 Dec 2015 00:58:59 GMT
>> Content-Type: text/html;charset=utf-8
>> Content-Length: 3964
>> X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
>> Vary: Accept-Language
>> Content-Language: en
>> Age: 1450659540
>> Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
>> than 1 day old
>> Warning: 110 squid "Response is stale"
>> Warning: 111 squid "Revalidation failed"
>> X-Cache: HIT from alcdmz1
>> X-Cache-Lookup: HIT from alcdmz1:3128
>> X-Cache: MISS from gsdmz1
>> X-Cache-Lookup: MISS from gsdmz1:3128
>> Via: 1.1 alcdmz1 (squid), 1.1 gsdmz1 (squid)
>> Connection: close
>>
>> ---response end---
>> 504 Gateway T

[squid-users] squid cache peer issues

2015-12-20 Thread Alex Samad
Hi

Running on CentOS 6.7.

3.5.12 is still not available on CentOS 6.

rpm -qa | grep squid
squid-helpers-3.5.11-1.el6.x86_64
squid-3.5.11-1.el6.x86_64

This is the 2 cache_peer statements I use

# on alcdmz1
cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
no-query standby=10
#cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
no-query standby=10

# on gsdmz1
#cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
no-query standby=10
cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
no-query standby=10

on alcdmz1 with export http_proxy pointing to alcdmz1

wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
-O /dev/null
Setting --output-document (outputdocument) to /dev/null
DEBUG output created by Wget 1.12 on linux-gnu.

--2015-12-21 11:58:05--
http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
Resolving alcdmz1... 10.32.20.111
Caching alcdmz1 => 10.32.20.111
Connecting to alcdmz1|10.32.20.111|:3128... connected.
Created socket 4.
Releasing 0x0101d540 (new refcount 1).

---request begin---
GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: fonts.gstatic.com

---request end---
Proxy request sent, awaiting response...
---response begin---
HTTP/1.1 200 OK
Content-Type: font/woff2
Access-Control-Allow-Origin: *
Timing-Allow-Origin: *
Date: Mon, 30 Nov 2015 04:06:16 GMT
Expires: Tue, 29 Nov 2016 04:06:16 GMT
Last-Modified: Mon, 06 Oct 2014 20:40:59 GMT
X-Content-Type-Options: nosniff
Server: sffe
Content-Length: 25604
X-XSS-Protection: 1; mode=block
Cache-Control: public, max-age=31536000
Age: 1803109
Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
than 1 day old
X-Cache: HIT from alcdmz1
X-Cache-Lookup: HIT from alcdmz1:3128
Via: 1.1 alcdmz1 (squid)
Connection: close

---response end---
200 OK
Length: 25604 (25K) [font/woff2]
Saving to: `/dev/null'

100%[==>]
25,604  --.-K/s   in 0s

Closed fd 4
2015-12-21 11:58:05 (1.01 GB/s) - `/dev/null' saved [25604/25604]


on gsdmz1


wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
-O /dev/null
Setting --output-document (outputdocument) to /dev/null
DEBUG output created by Wget 1.12 on linux-gnu.

--2015-12-21 11:58:59--
http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
Resolving gsdmz1... 10.32.20.110
Caching gsdmz1 => 10.32.20.110
Connecting to gsdmz1|10.32.20.110|:3128... connected.
Created socket 4.
Releasing 0x010a2930 (new refcount 1).

---request begin---
GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: fonts.gstatic.com

---request end---
Proxy request sent, awaiting response...
---response begin---
HTTP/1.1 504 Gateway Timeout
Server: squid
Mime-Version: 1.0
Date: Mon, 21 Dec 2015 00:58:59 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3964
X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
Vary: Accept-Language
Content-Language: en
Age: 1450659540
Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
than 1 day old
Warning: 110 squid "Response is stale"
Warning: 111 squid "Revalidation failed"
X-Cache: HIT from alcdmz1
X-Cache-Lookup: HIT from alcdmz1:3128
X-Cache: MISS from gsdmz1
X-Cache-Lookup: MISS from gsdmz1:3128
Via: 1.1 alcdmz1 (squid), 1.1 gsdmz1 (squid)
Connection: close

---response end---
504 Gateway Timeout
Closed fd 4
2015-12-21 11:58:59 ERROR 504: Gateway Timeout.


So why does it work from alcdmz1 and not from gsdmz1?

A


Re: [squid-users] reverse proxy setup

2015-12-11 Thread Alex Samad
Hi

I'm thinking it is Outlook not being able to talk TLS 1.1 and/or TLS
1.2 to squid. I am in the process of patching up my test box.

By ignoring that, I mean the reason it's there is that Outlook tried to
talk TLS 1.0 to it whilst I had TLS 1.0 turned off.
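One way to check which TLS versions the https_port actually negotiates is to probe it with openssl s_client. The host:port below is a placeholder, not something from this thread:

```shell
# With options=...NO_TLSv1 set, the -tls1 probe should fail the
# handshake while -tls1_1 and -tls1_2 succeed. Replace host:port
# with your reverse-proxy listener.
openssl s_client -connect proxy.example.com:443 -tls1    < /dev/null
openssl s_client -connect proxy.example.com:443 -tls1_1  < /dev/null
openssl s_client -connect proxy.example.com:443 -tls1_2  < /dev/null
```

Comparing the results against what Outlook's OS supports should show whether the client is simply unable to speak TLS 1.1/1.2.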

A

On 11 December 2015 at 15:50, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 11/12/2015 4:52 p.m., Alex Samad wrote:
>> Hi
>>
>>
>> Is there any way to remove these from the log
>>
>> kid1| Error negotiating SSL connection on FD 38: error:140760FC:SSL
>> routines:SSL23_GET_CLIENT_HELLO:unknown protocol (1/-1)
>>
>> this is the corresponding squid config
>> options=NO_SSLv2:NO_SSLv3:NO_TLSv1:SINGLE_DH_USE:CIPHER_SERVER_PREFERENCE
>>
>> Note: I don't get this when I re-enable TLSv1..
>
> Strange. Usually that means non-TLS traffic being passed to the HTTPS
> port. For example, clients opening plain-text HTTP connections to it.
>
>>
>> I am presuming I can ignore these.
>
> That is always up to you. In this case somebody is getting broken
> traffic, and your logs are filling with the messages saying so.
>
> Amos
>


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-10 Thread Alex Samad
Thanks everyone, I will try the changes and try again with the debug options.

TLS 1.0 might be an issue. I might have to look at the SSL offloading
config, so squid to Exchange can be HTTP instead of SSL.

Eliezer, hopefully you'll do a CentOS 6 build. Any chance you can let me
have a non-released .12? It would save me trying to build one.
A
On 11/12/2015 4:32 AM, "Eliezer Croitoru" <elie...@ngtech.co.il> wrote:

> On 09/12/2015 12:49, Alex Samad wrote:
>
>> Hi
>>
>> Can't seem to find  3.5.12 for centos pre compiled at
>> http://www1.ngtech.co.il/repo/centos/6/x86_64/
>>
> Since it's in testing
> I have built and tested for CentOS 7 but yet to publish them.
> It will take a week or more.
>
> Eliezer
>


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-10 Thread Alex Samad
Hi

Answering my own question:
http://www.squid-cache.org/Versions/v3/3.5/cfgman/http_port.html

It seems there is a no-vhost option, so I presume vhost turns it on.


On 11 December 2015 at 09:23, Alex Samad <a...@samad.com.au> wrote:
> Hi
>
>
> On 10 December 2015 at 23:44, dweimer <dwei...@dweimer.net> wrote:
>> https_port 10.50.20.12:443 accel defaultsite=mail.mydomain.com \
>>  cert=/certs/wildcard.certificate.crt \
>>  key=/certs/wildcard.certificate.key \
>>  options=NO_SSLv2:NO_SSLv3:NO_TLSv1:SINGLE_DH_USE:CIPHER_SERVER_PREFERENCE \
>>  dhparams=/usr/local/etc/squid/dh.param \
>>  cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:+HIGH:+MEDIUM:!SSLv2:!RC4 \
>>  vhost
>
> what is the vhost option can't find it on the doco page
> http://www.squid-cache.org/Versions/v3/3.5/cfgman/https_port.html
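For the archive, a sketch of how these accel-mode options relate, going by the http_port/https_port directive documentation; the hostname and certificate paths below are placeholders, not taken from this thread:

```
# defaultsite=name : Host header to assume when a request arrives without one.
# vhost            : use the request's Host header to select the virtual host.
# no-vhost         : ignore the Host header; treat everything as defaultsite.
https_port 443 accel defaultsite=mail.example.com vhost \
    cert=/etc/squid/example.crt key=/etc/squid/example.key
```

With a wildcard certificate fronting several domains, vhost is the behaviour you want; no-vhost pins all requests to the defaultsite name.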


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-10 Thread Alex Samad
Hi


On 10 December 2015 at 23:44, dweimer  wrote:
> https_port 10.50.20.12:443 accel defaultsite=mail.mydomain.com \
>  cert=/certs/wildcard.certificate.crt \
>  key=/certs/wildcard.certificate.key \
>  options=NO_SSLv2:NO_SSLv3:NO_TLSv1:SINGLE_DH_USE:CIPHER_SERVER_PREFERENCE \
>  dhparams=/usr/local/etc/squid/dh.param \
>  cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:+HIGH:+MEDIUM:!SSLv2:!RC4 \
>  vhost

What is the vhost option? I can't find it on the doc page:
http://www.squid-cache.org/Versions/v3/3.5/cfgman/https_port.html


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-10 Thread Alex Samad
Hi

So I have taken this config, done some slight customization for my site,
and it appears to be working.

Thanks for this.

On 10 December 2015 at 23:44, dweimer <dwei...@dweimer.net> wrote:
> On 2015-12-09 11:29 pm, Alex Samad wrote:
>>
>> Hi
>>
>> config
>> https_port 22.4.2.5:443 accel
>> cert=/etc/httpd/conf.d/office.abc.com.crt
>> key=/etc/httpd/conf.d/office.abc.com.key defaultsite=office.abc.com
>> options=NO_SSLv2,NO_SSLv3
>> dhparams=/etc/squid/squid-office-dhparams.pem
>>
>> cipher=ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
>> cache_peer 127.0.0.1 parent 443 0 proxy-only no-query no-digest
>> originserver login=PASS ssl sslflags=DONT_VERIFY_PEER
>> sslcert=/etc/httpd/conf.d/office.abc.com.crt
>> sslkey=/etc/httpd/conf.d/office.abc.com.key name=webServer
>> cache_peer 10.32.69.11 parent 443 0 proxy-only no-query no-digest
>> originserver login=PASS front-end-https=on ssl
>> sslflags=DONT_VERIFY_PEER sslcert=/etc/httpd/conf.d/office.abc.com.crt
>> sslkey=/etc/httpd/conf.d/office.abc.com.key name=exchangeServer
>> acl exch_domain dstdomain office.abc.com
>> acl exch_path urlpath_regex -i /exch(ange|web)
>> acl exch_path urlpath_regex -i /public
>> acl exch_path urlpath_regex -i /owa
>> acl exch_path urlpath_regex -i /ecp
>> acl exch_path urlpath_regex -i /microsoft-server-activesync
>> acl exch_path urlpath_regex -i /rpc
>> acl exch_path urlpath_regex -i /rpcwithcert
>> acl exch_path urlpath_regex -i /exadmin
>> acl exch_path urlpath_regex -i /ews
>> acl exch_path urlpath_regex -i /oab
>> acl exch_path urlpath_regex -i /autodiscover
>> cache_peer_access exchangeServer allow exch_domain exch_path
>> cache_peer_access webServer deny exch_domain exch_path
>> never_direct allow exch_domain exch_path
>> cache_mem 32 MB
>> maximum_object_size_in_memory 128 KB
>> access_log stdio:/var/log/squid/office-access.log squid
>> cache_log /var/log/squid/office-cache.log
>> cache_store_log stdio:/var/log/squid/office-cache_store.log
>> pid_filename /var/run/squid-office.pid
>> visible_hostname office.abc.com
>> deny_info TCP_RESET all
>> http_access allow all
>> miss_access allow all
>> icp_port 0
>> snmp_port 0
>>
>>
>>
>> cache.log
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Process ID 5631
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Process Roles: worker
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| With 1024 file descriptors
>> available
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Initializing IP Cache...
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| DNS Socket created at 0.0.0.0,
>> FD 6
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Adding domain
>> yieldbroker.com from /etc/resolv.conf
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Adding nameserver
>> 10.32.20.100 from /etc/resolv.conf
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Adding nameserver
>> 10.32.20.102 from /etc/resolv.conf
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Logfile: opening log
>> stdio:/var/log/squid/office-access.log
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Local cache digest enabled;
>> rebuild/rewrite every 3600/3600 sec
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Logfile: opening log
>> stdio:/var/log/squid/office-cache_store.log
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Swap maxSize 0 + 32768 KB,
>> estimated 2520 objects
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Target number of buckets: 126
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Using 8192 Store buckets
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Max Mem  size: 32768 KB
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Max Swap size: 0 KB
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Using Least Load store dir
>> selection
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Current Directory is /etc/squid
>> Jan 01 10:33:35 1970/12/10 16:15:42 kid1| Finished loading MIME types and
>> icons.
>> Jan 01 10:33:35 1970/12/10 16:15:42 

[squid-users] reverse proxy setup

2015-12-10 Thread Alex Samad
Hi


Is there any way to remove these from the log

kid1| Error negotiating SSL connection on FD 38: error:140760FC:SSL
routines:SSL23_GET_CLIENT_HELLO:unknown protocol (1/-1)

this is the corresponding squid config
options=NO_SSLv2:NO_SSLv3:NO_TLSv1:SINGLE_DH_USE:CIPHER_SERVER_PREFERENCE

Note: I don't get this when I re-enable TLSv1.

I am presuming I can ignore these.
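As far as I know Squid has no per-message suppression for this handshake noise (short of lowering debug_options levels), so one workaround is filtering a copy of the log after the fact. A minimal sketch; the sample file path and its contents are fabricated for illustration:

```shell
# Build a small fabricated cache.log sample.
cat > /tmp/sample-cache.log <<'EOF'
2015/12/11 kid1| Error negotiating SSL connection on FD 38: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol (1/-1)
2015/12/11 kid1| Accepting HTTP Socket connections
EOF
# Drop the recurring handshake-noise line, keep everything else.
grep -v 'SSL23_GET_CLIENT_HELLO:unknown protocol' /tmp/sample-cache.log > /tmp/quiet-cache.log
cat /tmp/quiet-cache.log
```

Note that this only hides the symptom; as Amos points out below, the message means some client really is sending non-TLS traffic to the port.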


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-10 Thread Alex Samad
Hi

I did the changeover today.
Tested with Windows 7 + Exchange 2010, and it wouldn't connect while
TLS 1.0 was disabled!

Interestingly, IE worked against the web site, so...

Did you come across this issue?


On 11 December 2015 at 11:09, dweimer <dwei...@dweimer.net> wrote:
> On 2015-12-10 4:24 pm, Alex Samad wrote:
>>
>> Hi
>>
>> Answer my own question
>> http://www.squid-cache.org/Versions/v3/3.5/cfgman/http_port.html
>>
>> seems like there is a no-vhost, I presume vhost turns it on
>>
>>
>> On 11 December 2015 at 09:23, Alex Samad <a...@samad.com.au> wrote:
>>>
>>> Hi
>>>
>>>
>>> On 10 December 2015 at 23:44, dweimer <dwei...@dweimer.net> wrote:
>>>>
>>>> https_port 10.50.20.12:443 accel defaultsite=mail.mydomain.com \
>>>>  cert=/certs/wildcard.certificate.crt \
>>>>  key=/certs/wildcard.certificate.key \
>>>>
>>>> options=NO_SSLv2:NO_SSLv3:NO_TLSv1:SINGLE_DH_USE:CIPHER_SERVER_PREFERENCE \
>>>>  dhparams=/usr/local/etc/squid/dh.param \
>>>>  cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:+HIGH:+MEDIUM:!SSLv2:!RC4 \
>>>>  vhost
>>>
>>>
>>> what is the vhost option can't find it on the doco page
>>> http://www.squid-cache.org/Versions/v3/3.5/cfgman/https_port.html
>
>
> It may be on by default now; unless you are doing multiple host names, it's
> not necessary. The setup on mine is using a wildcard certificate and is
> proxying multiple domain names.
>
>
> --
> Thanks,
>Dean E. Weimer
>http://www.dweimer.net/


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-09 Thread Alex Samad
Hi

Can't seem to find 3.5.12 pre-compiled for CentOS at
http://www1.ngtech.co.il/repo/centos/6/x86_64/


On 8 December 2015 at 19:34, Amos Jeffries  wrote:
> * try an upgrade to 3.5.12. There were some regressions in the .10/.11
> releases that can lead to really weird behaviour.


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-09 Thread Alex Samad
35992070307CD15EE743F71344E1C1AE   ? ? ? ? ?/?
?/? ? ?
Dec 10 16:16:37 2015.873 RELEASE -1 
17EFD3BCAF4265B7CF7803AD0289DD7E   ? ? ? ? ?/?
?/? ? ?
Dec 10 16:16:49 2015.228 RELEASE -1 
2666EC9714425D57FDC4CD15965D350B   ? ? ? ? ?/?
?/? ? ?



access.logs
Dec 10 16:17:09 2015.706 13 192.168.56.1 TCP_MISS/200 6578 POST
https://office.abc.com/ews/exchange.asmx - FIRSTUP_PARENT/10.32.69.11
text/xml
Dec 10 16:19:36 2015.447 206818 192.168.56.1 TCP_MISS/200 16532
RPC_OUT_DATA https://office.abc.com/rpc/rpcproxy.dll? -
FIRSTUP_PARENT/10.32.69.11 application/rpc
Dec 10 16:19:36 2015.449 206862 192.168.56.1 TCP_MISS_ABORTED/502 4493
RPC_IN_DATA https://office.abc.com/rpc/rpcproxy.dll? -
FIRSTUP_PARENT/10.32.69.11 text/html
Dec 10 16:19:36 2015.453 207197 192.168.56.1 TCP_MISS_ABORTED/000 0
RPC_IN_DATA https://office.abc.com/rpc/rpcproxy.dll? -
FIRSTUP_PARENT/10.32.69.11 -
Dec 10 16:19:36 2015.453 207087 192.168.56.1 TCP_MISS_ABORTED/200
48056 RPC_OUT_DATA https://office.abc.com/rpc/rpcproxy.dll? -
FIRSTUP_PARENT/10.32.69.11 application/rpc
Dec 10 16:20:07 2015.305  24688 192.168.56.1 TCP_MISS_ABORTED/000 0
RPC_IN_DATA https://office.abc.com/rpc/rpcproxy.dll? -
FIRSTUP_PARENT/10.32.69.11 -
Dec 10 16:20:07 2015.306  24654 192.168.56.1 TCP_MISS_ABORTED/200 2004
RPC_OUT_DATA https://office.abc.com/rpc/rpcproxy.dll? -
FIRSTUP_PARENT/10.32.69.11 application/rpc


This happens when I try to send an email with an attachment. An email with
no attachment goes through no problem...


This config works with 3.1, not with 3.5.

Still on .11, as I can't find a CentOS 6 build of .12.

I think there is some issue with RPC sending or receiving.

On 8 December 2015 at 19:34, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 8/12/2015 7:35 p.m., Alex Samad wrote:
>> Hi
>>
>> Any suggestions on how to debug this... I wouldn't mind rolling
>> forward to 3.5 again
>>
>
> Some ideas inline. The main ones are:
>
> * re-enable cache.log. It is not optional.
>
> * try an upgrade to 3.5.12. There were some regressions in the .10/.11
> releases that can lead to really weird behaviour.
>
>
>> On 2 December 2015 at 20:39, Alex Samad wrote:
>>> Just to add to this I have a lot of these in the log file
>>>
>>> TCP_MISS_ABORTED/000 0 RPC_IN_DATA
>>> TCP_MISS_ABORTED/200 4322 RPC_OUT_DATA
>>> TCP_MISS_ABORTED/000 0 RPC_IN_DATA https:
>>>
>>>
>>>
>>> On 2 December 2015 at 17:24, Alex Samad wrote:
>>>> Hi
>>>>
>>>> recently upgraded to squid-3.5.11-1.el6.x86_64 from the centos 6.7  squid 
>>>> 3.1
>>>>
>>>>
>>>> I am now having problems with people who use active sync via this
>>>> connection . seems like emails with attachments aren't making it
>>>> through .
>>>>
>>>> cache_peer 10.32.69.11 parent 443 0 proxy-only no-query no-digest
>>>> originserver login=PASS front-end-https=on ssl
>>>> sslflags=DONT_VERIFY_PEER sslcert=/etc/httpd/conf.d/office.yx.com.crt
>>>> sslkey=/etc/httpd/conf.d/office.yx.com.key name=exchangeServer
>
> You could try changing these from login=PASS to login=PASSTHRU
>
>>>>
>>>>
>>>> cache_peer 127.0.0.1 parent 443 0 proxy-only no-query no-digest
>>>> originserver login=PASS ssl sslflags=DONT_VERIFY_PEER
>>>> sslcert=/etc/httpd/conf.d/office.yx.com.crt
>>>> sslkey=/etc/httpd/conf.d/office.yx.com.key name=webServer
>>>> c
>>>>
>>>> # List of acceptable URLs to send to the Exchange server
>>>> acl exch_url url_regex -i office.yieldbroker.com/exchange
>>>> acl exch_url url_regex -i office.yieldbroker.com/exchweb
>>>> acl exch_url url_regex -i office.yieldbroker.com/public
>>>> acl exch_url url_regex -i office.yieldbroker.com/owa
>>>> acl exch_url url_regex -i office.yieldbroker.com/ecp
>>>> acl exch_url url_regex -i 
>>>> office.yieldbroker.com/microsoft-server-activesync
>>>> acl exch_url url_regex -i office.yieldbroker.com/rpc
>>>> acl exch_url url_regex -i office.yieldbroker.com/rpcwithcert
>>>> acl exch_url url_regex -i office.yieldbroker.com/exadmin
>>>> acl exch_url url_regex -i office.yieldbroker.com/oab
>>>> # added after
>>>> acl exch_url url_regex -i office.yieldbroker.com/ews
>>>> # Not configured on exchange 2010
>>>> #acl exch_url url_regex -i office.yieldbroker.com/autodiscover
>>>>
>>>> # Send the Exchange URLs to the Exchange server
>>>> cache_peer_access exchangeServer allow exch_url

Re: [squid-users] squid auth

2015-12-08 Thread Alex Samad
Hi

So what you're saying is I should install msktutil and let it manage
the Squid Kerberos keytab file.


Could you possibly help with the changes to the squid.conf file? Do I
leave it as-is and just add Kerberos first?


On 8 December 2015 at 20:03, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 8/12/2015 7:44 p.m., Alex Samad wrote:
>> Hi
>>
>> Currently using 3.1 (from centos 6)
>> I have setup squid to auth against MS AD
>>
>> I have
>> # ###
>> # Negotiate
>> # ###
>>
>> # http://wiki.squid-cache.org/Features/Authentication
>> # http://wiki.squid-cache.org/Features/NegotiateAuthentication
>> auth_param negotiate program /usr/bin/ntlm_auth
>> --helper-protocol=gss-spnego --configfile /etc/samba/smb.conf-squid
>> auth_param negotiate children 10 startup=0 idle=3
>> auth_param negotiate keep_alive on
>>
>> # ###
>> # NTLM AUTH
>> # ###
>>
>> # ntlm auth
>> auth_param ntlm program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-ntlmssp --configfile
>> /etc/samba/smb.conf-squid
>> auth_param ntlm children 10
>> #auth_param ntlm children 10 startup=0 idle=3
>> #auth_param ntlm keep_alive
>>
>>
>> # ###
>> # NTLM over basic
>> # ###
>>
>> # warning: basic authentication sends passwords plaintext
>> # a network sniffer can and will discover passwords
>> auth_param basic program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic --configfile
>> /etc/samba/smb.conf-squid
>> auth_param basic children 5
>> auth_param basic realm Squid proxy-caching web server
>> auth_param basic credentialsttl 2 hours
>>
>>
>> I want to move towards using kerberos come to this page
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
>>
>> worked through that, but i saw this
>>
>> Do not use this method if you run winbindd or other samba services as
>> samba will reset the machine password every x days and thereby makes
>> the keytab invalid !!
>
>
> As I understand it that disclaimer applies only to the "OR with Samba"
> instructions for keytab creation directly above it. The other two
> methods should work.
>
> Also, it is just a disclaimer about a known problem. There is always the
> option to setup a script that re-builds the keytab and reloads Squid
> every X days when it changes.
>
>>
>> I have winbindd running for my users list in linux
>>
>> is there a way around this and if not how
>>
>
> The initial mskutil method of keytab creation is both a way around it
> and the preferred method of keytab creation.
>
> As you found elsewhere ...
>
>> then found this one
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory
>>
>> but I am not using msktutil, i do have samba and the krb-workstation 
>> installed
>>
>
> mskutil is just a tool to generate keytabs and link the machine to
> domain. I *think* it should still be usable even if you have Samba; the
> problem is just that if you let Samba know about the keytab and account
> it will do the periodic updates.
>
> Amos
>
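Amos's note above about "a script that re-builds the keytab and reloads Squid every X days" could be sketched as a cron job. This is only a sketch under assumptions: the cron path, keytab path, and schedule are invented here, and msktutil's flags should be checked against the installed version.

```
# /etc/cron.weekly/squid-keytab  (hypothetical path and schedule)
#!/bin/sh
# Re-sync the machine account password and keytab with AD, then reload
# Squid so the Negotiate helpers pick up the new keys.
msktutil --auto-update --keytab /etc/squid/HTTP.keytab && squid -k reconfigure
```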


Re: [squid-users] squid auth

2015-12-08 Thread Alex Samad
So when I do kinit, I should use a different account from the Samba one?

I'm lost, sorry.

When I attach with winbind, I kinit with my personal admin account and
also do a net ads join -U .

The password on the  doesn't / hasn't changed.

Are you talking about the computer account password?

If so, then I'll set up a different computer account for the Squid
Kerberos application!


On 9 December 2015 at 07:20, Markus Moeller <hua...@moeller.plus.com> wrote:
> Hi,
>
>   The issue appears if you use the same AD account for samba and the
> kerberos keytab creation.  As samba will reset the password of the AD
> account and thereby invalidate the extracted keytab.
>
> Markus
>
>
> "Alex Samad"  wrote in message
> news:CAJ+Q1PW9Ue4zdT9GCt-4MjW=UjDWyBOPc4AFrcjG=qfnewm...@mail.gmail.com...
>
>
> Hi
>
> So what your saying is I should install the mskutil and let it manage
> the squid krb keytab file.
>
>
> Could you possible help with the changed to the squid.conf file do I
> leave as is and just add kerberos first ?
>
>
> On 8 December 2015 at 20:03, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>
>> On 8/12/2015 7:44 p.m., Alex Samad wrote:
>>>
>>> Hi
>>>
>>> Currently using 3.1 (from centos 6)
>>> I have setup squid to auth against MS AD
>>>
>>> I have
>>> # ###
>>> # Negotiate
>>> # ###
>>>
>>> # http://wiki.squid-cache.org/Features/Authentication
>>> # http://wiki.squid-cache.org/Features/NegotiateAuthentication
>>> auth_param negotiate program /usr/bin/ntlm_auth
>>> --helper-protocol=gss-spnego --configfile /etc/samba/smb.conf-squid
>>> auth_param negotiate children 10 startup=0 idle=3
>>> auth_param negotiate keep_alive on
>>>
>>> # ###
>>> # NTLM AUTH
>>> # ###
>>>
>>> # ntlm auth
>>> auth_param ntlm program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-ntlmssp --configfile
>>> /etc/samba/smb.conf-squid
>>> auth_param ntlm children 10
>>> #auth_param ntlm children 10 startup=0 idle=3
>>> #auth_param ntlm keep_alive
>>>
>>>
>>> # ###
>>> # NTLM over basic
>>> # ###
>>>
>>> # warning: basic authentication sends passwords plaintext
>>> # a network sniffer can and will discover passwords
>>> auth_param basic program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-basic --configfile
>>> /etc/samba/smb.conf-squid
>>> auth_param basic children 5
>>> auth_param basic realm Squid proxy-caching web server
>>> auth_param basic credentialsttl 2 hours
>>>
>>>
>>> I want to move towards using kerberos come to this page
>>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
>>>
>>> worked through that, but i saw this
>>>
>>> Do not use this method if you run winbindd or other samba services as
>>> samba will reset the machine password every x days and thereby makes
>>> the keytab invalid !!
>>
>>
>>
>> As I understand it that disclaimer applies only to the "OR with Samba"
>> instructions for keytab creation directly above it. The other two
>> methods should work.
>>
>> Also, it is just a disclaimer about a known problem. There is always the
>> option to setup a script that re-builds the keytab and reloads Squid
>> every X days when it changes.
>>
>>>
>>> I have winbindd running for my users list in linux
>>>
>>> is there a way around this and if not how
>>>
>>
>> The initial mskutil method of keytab creation is both a way around it
>> and the preferred method of keytab creation.
>>
>> As you found elsewhere ...
>>
>>> then found this one
>>>
>>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory
>>>
>>> but I am not using msktutil, i do have samba and the krb-workstation
>>> installed
>>>
>>
>> mskutil is just a tool to generate keytabs and link the machine to
>> domain. I *think* it should still be usable even if you have Samba; the
>> problem is just that if you let Samba know about the keytab and account
>> it will do the periodic updates.
>>
>> Amos
>>


Re: [squid-users] squid reverse proxy infront of exchange 2010

2015-12-07 Thread Alex Samad
Hi

Any suggestions on how to debug this? I wouldn't mind rolling
forward to 3.5 again.

On 2 December 2015 at 20:39, Alex Samad <a...@samad.com.au> wrote:
> Just to add to this I have a lot of these in the log file
>
> TCP_MISS_ABORTED/000 0 RPC_IN_DATA
> TCP_MISS_ABORTED/200 4322 RPC_OUT_DATA
> TCP_MISS_ABORTED/000 0 RPC_IN_DATA https:
>
>
>
>
>
>
> On 2 December 2015 at 17:24, Alex Samad <a...@samad.com.au> wrote:
>> Hi
>>
>> recently upgraded to squid-3.5.11-1.el6.x86_64 from the centos 6.7  squid 3.1
>>
>>
>> I am now having problems with people who use active sync via this
>> connection . seems like emails with attachments aren't making it
>> through .
>>
>> cache_peer 10.32.69.11 parent 443 0 proxy-only no-query no-digest
>> originserver login=PASS front-end-https=on ssl
>> sslflags=DONT_VERIFY_PEER sslcert=/etc/httpd/conf.d/office.yx.com.crt
>> sslkey=/etc/httpd/conf.d/office.yx.com.key name=exchangeServer
>>
>>
>> cache_peer 127.0.0.1 parent 443 0 proxy-only no-query no-digest
>> originserver login=PASS ssl sslflags=DONT_VERIFY_PEER
>> sslcert=/etc/httpd/conf.d/office.yx.com.crt
>> sslkey=/etc/httpd/conf.d/office.yx.com.key name=webServer
>> c
>>
>> # List of acceptable URLs to send to the Exchange server
>> acl exch_url url_regex -i office.yieldbroker.com/exchange
>> acl exch_url url_regex -i office.yieldbroker.com/exchweb
>> acl exch_url url_regex -i office.yieldbroker.com/public
>> acl exch_url url_regex -i office.yieldbroker.com/owa
>> acl exch_url url_regex -i office.yieldbroker.com/ecp
>> acl exch_url url_regex -i office.yieldbroker.com/microsoft-server-activesync
>> acl exch_url url_regex -i office.yieldbroker.com/rpc
>> acl exch_url url_regex -i office.yieldbroker.com/rpcwithcert
>> acl exch_url url_regex -i office.yieldbroker.com/exadmin
>> acl exch_url url_regex -i office.yieldbroker.com/oab
>> # added after
>> acl exch_url url_regex -i office.yieldbroker.com/ews
>> # Not configured on exchange 2010
>> #acl exch_url url_regex -i office.yieldbroker.com/autodiscover
>>
>> # Send the Exchange URLs to the Exchange server
>> cache_peer_access exchangeServer allow exch_url
>>
>> # Send everything else to the Apache
>> cache_peer_access webServer deny exch_url
>>
>> # This is to protect Squid
>> never_direct allow exch_url
>>
>> # Logging Configuration
>> redirect_rewrites_host_header off
>> cache_mem 32 MB
>> maximum_object_size_in_memory 128 KB
>> cache_log none
>> cache_store_log none
>>
>> access_log stdio:/var/log/squid/office-access.log squid
>> #access_log none
>> cache_log /var/log/squid/office-cache.log
>> #cache_log none
>> pid_filename /var/run/squid-office.pid
>>
>>
>> # Set the hostname so that we can see Squid in the path (Optional)
>> visible_hostname yieldbroker.com
>> deny_info TCP_RESET all
>>
>> # ACL - required to allow
>> #acl all src ALL
>>
>> # Allow everyone through, internal and external connections
>> http_access allow all
>> miss_access allow all
>>
>> icp_port 0
>> snmp_port 0
>>
>> via off
>>
>>
>> The previous setup had worked for at least 18 months.
>>
>> Alex


[squid-users] chrome proxy issue

2015-12-06 Thread Alex Samad
Hi

https://code.google.com/p/chromium/issues/detail?id=544255
Not a squid issue, but might stop people wasting time debugging squid

A


Re: [squid-users] rollback squid

2015-12-02 Thread Alex Samad
By discard, do you mean delete the cache directories?

If so:

I currently have 3 directories. Is this an opportunity to consolidate
down to 1 directory? Is that better?

On 3 December 2015 at 03:03, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 3/12/2015 12:30 a.m., Alex Samad wrote:
>> Hi
>>
>> I am rolling back from 3.5 to 3.1
>>
>> my cache directory was updated for the 3.1 to 3.5.
>>
>> Is there going to be an issue when i roll back ?
>
> Yes, you will have to discard the current cache and start with it empty.
> The old Squid cannot cope with the new format.
>
> Amos
>


Re: [squid-users] setting up cache peering

2015-12-02 Thread Alex Samad
Hi

Thanks, I will do that when I get back to 3.5. I had to roll back because of
my issues with 3.5, reverse proxying, and Outlook.

Are these suggestions still valid with 3.1?

Thanks

On 3 December 2015 at 03:22, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 2/12/2015 6:50 p.m., Alex Samad wrote:
>> Hi
>>
>> I recently moved to squid-3.5.11-1.el6.x86_64 on centos 6.7.
>>
>> from the centos 3.1 i think ?
>>
>> This what I had originall
>> #cache_peer gsdmz1.xy.com sibling 3128 3130 proxy-only
>> #cache_peer alcdmz1.xy.com sibling 3128 3130 proxy-only
>>
>> I had a shared config between the 2 server gsdmz1 and alcdmz1. I would
>> uncomment 1 or the other depending.
>>
>> during my upgrade I coped the gsdmz1 squid config over to alcdmz1 but
>> forgot to uncomment the
>> cache_peer alcdmz1.xy.com sibling 3128 3130 proxy-only
>>
>> so alcdmz1 was talking to itself at times.
>>
>> using this as my test
>> wget -d   
>> http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
>> -O /dev/null
>>
>> and setting http_proxy to either alc or gsdmz1 I would get a 504 error.
>>
>>
>> wget -d  
>> http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
>> -O /dev/null
>> Setting --output-document (outputdocument) to /dev/null
>> DEBUG output created by Wget 1.12 on linux-gnu.
>>
>> --2015-12-02 16:35:34--
>> http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
>> Resolving alcdmz1... 10.3.2.111
>> Caching alcdmz1 => 10.3.2.111
>> Connecting to alcdmz1|10.3.2.111|:3128... connected.
>> Created socket 4.
>> Releasing 0x01ea5db0 (new refcount 1).
>>
>> ---request begin---
>> GET 
>> http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
>> HTTP/1.0
>> User-Agent: Wget/1.12 (linux-gnu)
>> Accept: */*
>> Host: www.smh.com.au
>>
>> ---request end---
>> Proxy request sent, awaiting response...
>> ---response begin---
>> HTTP/1.1 504 Gateway Timeout
>> Server: squid
>> Mime-Version: 1.0
>> Date: Wed, 02 Dec 2015 05:35:34 GMT
>> Content-Type: text/html;charset=utf-8
>> Content-Length: 4063
>> X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
>> Vary: Accept-Language
>> Content-Language: en
>> X-Cache: MISS from gsdmz1
>> X-Cache-Lookup: MISS from gsdmz1:3128
>> X-Cache: MISS from alcdmz1
>> X-Cache-Lookup: MISS from alcdmz1:3128
>> Via: 1.1 gsdmz1 (squid), 1.1 alcdmz1 (squid)
>> Connection: close
>>
>> ---response end---
>> 504 Gateway Timeout
>> Closed fd 4
>> 2015-12-02 16:35:34 ERROR 504: Gateway Timeout.
>>
>
> *timeout* is terribly bad. Turn the Via header back on. Its sole purpose
> is to let the peers reject messages that are looping like that one.
>
>>
>> I changed the line to be
>> cache_peer gsdmz1.xy.com sibling 3128 3130 proxy-only standby=50
>> on the alcdmz1 box
>> and
>> cache_peer alcdmz1.xy.com sibling 3128 3130 proxy-only standby=50
>> on the gsdmz1 box
>>
>> but this still gave me 504 errors ?
>
> Notice how the difference in configurations is that you added
> standby=50. It should not be having any effect that we know of, but does
> mean that connections are pre-opened to the sibling and thus have a much
> lower latency than any normal TCP connection. If your Squid is searching
> for fastest-route using the netdb latency tables that could be the
> opposite of what you need.
>
>> I tried to force a new version through both proxies by using wget with
>> no-cache option.  But that didn't help.
>
> Sending "no-cache" from the client makes it worse, since that prevents
> HIT from happening on either peer.
>
> When combined with "cache_peer ... proxy-only" configuration it prevents
> any traffic that goes through a peer from being cached.
>
>>
>> So what went wrong, how can I flush out the stale 504.
>
> It is not cached. There is nothing to flush (except perhapse the standby
> connections, see above).
>
>> What is the best way to setup the 2 proxies to talk to each other
>> before going out to the internet.
>
> That depends on the proxies, version, and what you want them to do.
>
>>
>> the proxies run on a pacemaker cluster. I have 2 vip's setup as the
>> proxy addresses, in normal conditions these address are setup 1 on
>> each server. whilst working on a server I can move the vip and not
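Pulling together Amos's advice in this thread (point each box only at the other sibling, and keep Via enabled so loops can be caught), the per-host config would look roughly like this. Hostnames are the ones from the thread; the rest is a sketch, not a verified configuration:

```
# On alcdmz1: peer with gsdmz1 only (and the mirror image on gsdmz1).
cache_peer gsdmz1.xy.com sibling 3128 3130 proxy-only
# Leave Via on; it is what lets a sibling reject a looping request
# instead of timing out with ERR_ONLY_IF_CACHED_MISS / 504.
via on
```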

Re: [squid-users] rollback squid

2015-12-02 Thread Alex Samad
:)

Okay, done.

It's a VM on a single VMDK.
10G NICs (virtual and physical)



On 3 December 2015 at 14:27, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 3/12/2015 9:18 a.m., Alex Samad wrote:
>> Discard you mean delete .. the cache directories
>>
>
> Yes, and redo the squid -z process to partition new one(s).
>
>> if so
>>
>> I currently have 3 directories, is this an opportunity to consolidate
>> down to 1 directory is that better ?
>
> Another tricky question with "it depends" as the answer.
>
> If you have a fast (Gbit) network and want lowest latency possible,
> removing cache_dir entirely is best. But you will pay for latency gains
> in bandwidth from the lower HIT ratio.
>
> If the cache_dir are of UFS/AUFS/diskd type and on the same HDD. Then
> there will definitely be speed gains from removing 2 of them. If they
> are on different HDD disk/spindles then you only gain if the removed
> ones have slow RPM speeds, commonly have errors, or are in use a lot by
> other processes.
>
> Amos
>
>
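The rollback steps discussed above (discard the 3.5-format cache, then redo the squid -z step) could be sketched as the commands below. The cache path is a placeholder, so check the cache_dir lines in squid.conf first:

```
squid -k shutdown            # stop Squid so nothing writes to the cache
rm -rf /var/spool/squid/*    # discard the incompatible cache (placeholder path)
squid -z                     # let the old Squid rebuild empty swap directories
service squid start
```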


[squid-users] setting up cache peering

2015-12-01 Thread Alex Samad
Hi

I recently moved to squid-3.5.11-1.el6.x86_64 on centos 6.7.

from the CentOS 3.1 package, I think?

This is what I had originally:
#cache_peer gsdmz1.xy.com sibling 3128 3130 proxy-only
#cache_peer alcdmz1.xy.com sibling 3128 3130 proxy-only

I had a shared config between the 2 servers, gsdmz1 and alcdmz1. I would
uncomment one or the other depending on the host.

During my upgrade I copied the gsdmz1 squid config over to alcdmz1 but
forgot to uncomment the
cache_peer alcdmz1.xy.com sibling 3128 3130 proxy-only

So alcdmz1 was talking to itself at times.

Using this as my test:
wget -d   
http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
-O /dev/null

and setting http_proxy to either alc or gsdmz1 I would get a 504 error.


wget -d  
http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
-O /dev/null
Setting --output-document (outputdocument) to /dev/null
DEBUG output created by Wget 1.12 on linux-gnu.

--2015-12-02 16:35:34--
http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
Resolving alcdmz1... 10.3.2.111
Caching alcdmz1 => 10.3.2.111
Connecting to alcdmz1|10.3.2.111|:3128... connected.
Created socket 4.
Releasing 0x01ea5db0 (new refcount 1).

---request begin---
GET 
http://www.smh.com.au/business/markets-live/markets-live-investors-take-stock-20151201-gld1lu.html
HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: www.smh.com.au

---request end---
Proxy request sent, awaiting response...
---response begin---
HTTP/1.1 504 Gateway Timeout
Server: squid
Mime-Version: 1.0
Date: Wed, 02 Dec 2015 05:35:34 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 4063
X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from gsdmz1
X-Cache-Lookup: MISS from gsdmz1:3128
X-Cache: MISS from alcdmz1
X-Cache-Lookup: MISS from alcdmz1:3128
Via: 1.1 gsdmz1 (squid), 1.1 alcdmz1 (squid)
Connection: close

---response end---
504 Gateway Timeout
Closed fd 4
2015-12-02 16:35:34 ERROR 504: Gateway Timeout.


I changed the line to be
cache_peer gsdmz1.xy.com sibling 3128 3130 proxy-only standby=50
on the alcdmz1 box
and
cache_peer alcdmz1.xy.com sibling 3128 3130 proxy-only standby=50
on the gsdmz1 box

But this still gave me 504 errors.
I tried to force a new version through both proxies by using wget with
the no-cache option, but that didn't help.


So what went wrong, and how can I flush out the stale 504?
What is the best way to set up the 2 proxies to talk to each other
before going out to the internet?

The proxies run on a Pacemaker cluster. I have 2 VIPs set up as the
proxy addresses; in normal conditions these addresses are set up one on
each server. While working on a server I can move the VIP and not
affect anyone.


But I would like each proxy to benefit from the other's cache; what's
the best cache_peer setup for this arrangement?

Neither server is closer to the internet.

thanks
A
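For reference, the sibling arrangement described above boils down to one line per box, each naming only the other host (hostnames as in the thread); the self-peering mishap came from the same line being active on both machines:

```
# On alcdmz1 (sketch): point only at the other box.
cache_peer gsdmz1.xy.com sibling 3128 3130 proxy-only

# On gsdmz1, the mirror image:
cache_peer alcdmz1.xy.com sibling 3128 3130 proxy-only
```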


Re: [squid-users] issue with start / stop scripts

2015-11-28 Thread Alex Samad
Hi

yeah from the rpms. I found the variables to lengthen the timeout period.

But I got into the strange situation where the pid file was still there
(shutdown took longer than the timeout), and the scripts still thought
it was running, so stop would fail as it does a check first. Do we
need to do a check first on shutdown?

A

On 29 November 2015 at 09:14, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> What script are you using?
> If it's from my RPMs I might be able to patch it and make sure it will work
> better.
>
> Eliezer
>
> On 27/11/2015 08:09, Alex Samad wrote:
>>
>> Hi
>>
>> I have a rather long list of blocked address in my squid config.
>> and the default start stop timeout values are a bit short for my setup.
>>
>> when i did stop it failed because the time to parse the config took to
>> long. any reason it needs to parse to shutdown ?
>>
>> that left the pid file behind, which causes stop to fail again as
>> squid -k check -f /etc/squid/squid.conf
>>
>> fails no running process
>>
>> Alex
>>
>


Re: [squid-users] issue with start / stop scripts

2015-11-28 Thread Alex Samad
Hi

It's in the scripts:
stop() {
    echo -n $"Stopping $prog: "
    $SQUID -k check -f $SQUID_CONF >> /var/log/squid/squid.out 2>&1
    RETVAL=$?
    if [ $RETVAL -eq 0 ] ; then


Any reason to check the config before stopping a running squid ??


On 29 November 2015 at 09:32, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> A check on what?
> Basically to verify if squid is still running you need to verify that there
> are is not one squid instance running.
> The PID is kind of a hack to make sure squid is still there or not.
> In most cases you can cancel the timeout and check only for the PID.
> Also notice that there is a "rm -rf" there which was inherited from an old
> script that I got as a "gift" since my own script got lost in a server
> migration.
>
> You can run three checks in parallel:
> - the pid exists or not
> - the process exists or not(using "ps aux|grep squid")
> - check if the port in netstat is still in listening mode.
>
> Hope it helps,
> Eliezer
>
>
> On 29/11/2015 00:21, Alex Samad wrote:
>>
>> Hi
>>
>> yeah from the rpms. I found the variables to lengthen the timeout period.
>>
>> But I got in the strange situation where the pid file was still there
>> (shutdown took longer than the timeout). and the scripts still thought
>> it was running, so stop would fail as it does a check first. do we
>> need to do a check first on shutdown ??
>>
>> A
>
>
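Eliezer's suggestion of checking the PID directly can be sketched like this. It is a demonstration only: the pidfile path is a stand-in, and a real stop() would send kill -TERM to the pid rather than echo a status, avoiding the config re-parse that "squid -k check" forces:

```shell
pidfile=/tmp/squid_demo.pid   # stand-in path for this demonstration
echo $$ > "$pidfile"          # pretend squid wrote its pid here

pid=$(cat "$pidfile" 2>/dev/null)
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    state=running             # a real script would: kill -TERM "$pid"
else
    state=stopped
fi
rm -f "$pidfile"              # clean up (or a stale pidfile)
echo "squid is $state"
# prints: squid is running
```

Because kill -0 only tests whether the process exists, this works even when the config is too large to parse within the init script's timeout.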


Re: [squid-users] centos 6 install

2015-11-27 Thread Alex Samad
On 27 November 2015 at 17:56, Amos Jeffries  wrote:
>> Hi
>>
>> it was in the bottom of the previous mail, thats a copy of the log
>> starting from the start up
>
> Exactly. The new install of Squid is a newer version. With a new format
> of cache storage, updated data corruption protection, and detection.
>
> The old install was so old it had a v1 cache format, which is a clear
> sign that the old version also lacked some of those corruption
> protections. It is not unexpected to see signs of that lack in the old
> cache contents.

Great thanks.


Re: [squid-users] centos 6 install

2015-11-26 Thread Alex Samad
517a912a094501f226e715637e94bb63  /root/squid-3.5.11-1.el6.x86_64.rpm


cat /etc/yum.repos.d/squid.repo
#
# http://wiki.squid-cache.org/KnowledgeBase/CentOS
#
#

[squid]
name=Squid repo for CentOS Linux - $basearch
#IL mirror
baseurl=http://www1.ngtech.co.il/repo/centos/$releasever/$basearch/
failovermethod=priority
enabled=1
gpgcheck=0



On 27 November 2015 at 10:21, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> Where did you downloaded the rpm from?
> my repo at ngtech.co.il? or compiled it yourself?
> Make sure that the md5sum is the same
> $md5sum squid-3.5.11-1.el6.x86_64.rpm
> 517a912a094501f226e715637e94bb63  squid-3.5.11-1.el6.x86_64.rpm
> The checksums are at:
> http://www1.ngtech.co.il/repo/centos/6/x86_64/squid-3.5.11-1.el6.x86_64.rpm.asc
>
> Eliezer
>
>
> On 27/11/2015 01:00, Alex Samad wrote:
>>
>> Hi
>>
>> I am trying to upgrade from the centos squid to the squid one
>>   rpm -qa | grep squid
>> squid-3.1.23-9.el6.x86_64
>> rpm -Uvh squid-3.5.11-1.el6.x86_64.rpm
>>
>>
>> getting this error
>> error: unpacking of archive failed on file
>> /usr/share/squid/errors/zh-cn: cpio: rename failed - Is a directory
>>
>>
>> ls -l
>> drwxr-xr-x. 2 root root 4096 Sep 16 13:05 zh-cn
>> lrwxrwxrwx. 1 root root7 Nov 27 09:57 zh-cn;56578e40 -> zh-hans
>> lrwxrwxrwx. 1 root root7 Nov 27 09:58 zh-cn;56578e77 -> zh-hans
>>
>> going to remove the directory and try re installing
>>
>


Re: [squid-users] centos 6 install

2015-11-26 Thread Alex Samad
2015/11/27 11:02:41 kid1| Adaptation support is on
2015/11/27 11:02:41 kid1| Accepting HTTP Socket connections at
local=[::]:3128 remote=[::] FD 14 flags=9
2015/11/27 11:02:41 kid1| Accepting HTTP Socket connections at
local=[::]:8080 remote=[::] FD 15 flags=9
2015/11/27 11:02:41 kid1| Accepting ICP messages on [::]:3130
2015/11/27 11:02:41 kid1| Sending ICP messages from [::]:3130
2015/11/27 11:03:33 kid1| WARNING: Ignoring malformed cache entry.
2015/11/27 11:03:33 kid1| WARNING: Ignoring malformed cache entry.
2015/11/27 11:04:26 kid1| Done scanning /var/spool/squid dir (153502 entries)
2015/11/27 11:04:44 kid1| WARNING: Ignoring malformed cache entry.
2015/11/27 11:06:15 kid1| WARNING: Ignoring malformed cache entry.
20

On 27 November 2015 at 10:55, Alex Samad <a...@samad.com.au> wrote:
> 517a912a094501f226e715637e94bb63  /root/squid-3.5.11-1.el6.x86_64.rpm
>
>
> cat /etc/yum.repos.d/squid.repo
> #
> # http://wiki.squid-cache.org/KnowledgeBase/CentOS
> #
> #
>
> [squid]
> name=Squid repo for CentOS Linux - $basearch
> #IL mirror
> baseurl=http://www1.ngtech.co.il/repo/centos/$releasever/$basearch/
> failovermethod=priority
> enabled=1
> gpgcheck=0
>
>
>
> On 27 November 2015 at 10:21, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
>> Where did you downloaded the rpm from?
>> my repo at ngtech.co.il? or compiled it yourself?
>> Make sure that the md5sum is the same
>> $md5sum squid-3.5.11-1.el6.x86_64.rpm
>> 517a912a094501f226e715637e94bb63  squid-3.5.11-1.el6.x86_64.rpm
>> The checksums are at:
>> http://www1.ngtech.co.il/repo/centos/6/x86_64/squid-3.5.11-1.el6.x86_64.rpm.asc
>>
>> Eliezer
>>
>>
>> On 27/11/2015 01:00, Alex Samad wrote:
>>>
>>> Hi
>>>
>>> I am trying to upgrade from the centos squid to the squid one
>>>   rpm -qa | grep squid
>>> squid-3.1.23-9.el6.x86_64
>>> rpm -Uvh squid-3.5.11-1.el6.x86_64.rpm
>>>
>>>
>>> getting this error
>>> error: unpacking of archive failed on file
>>> /usr/share/squid/errors/zh-cn: cpio: rename failed - Is a directory
>>>
>>>
>>> ls -l
>>> drwxr-xr-x. 2 root root 4096 Sep 16 13:05 zh-cn
>>> lrwxrwxrwx. 1 root root7 Nov 27 09:57 zh-cn;56578e40 -> zh-hans
>>> lrwxrwxrwx. 1 root root7 Nov 27 09:58 zh-cn;56578e77 -> zh-hans
>>>
>>> going to remove the directory and try re installing
>>>
>>


[squid-users] centos 6 install

2015-11-26 Thread Alex Samad
Hi

I am trying to upgrade from the CentOS squid package to the squid-cache.org one
 rpm -qa | grep squid
squid-3.1.23-9.el6.x86_64
rpm -Uvh squid-3.5.11-1.el6.x86_64.rpm


getting this error
error: unpacking of archive failed on file
/usr/share/squid/errors/zh-cn: cpio: rename failed - Is a directory


ls -l
drwxr-xr-x. 2 root root 4096 Sep 16 13:05 zh-cn
lrwxrwxrwx. 1 root root 7 Nov 27 09:57 zh-cn;56578e40 -> zh-hans
lrwxrwxrwx. 1 root root 7 Nov 27 09:58 zh-cn;56578e77 -> zh-hans

going to remove the directory and try reinstalling


[squid-users] delay pools question

2015-10-25 Thread Alex Samad
HI

I have had a look at http://wiki.squid-cache.org/Features/DelayPools

Wondering if somebody can maybe explain how it rate limits downloads.

So I can understand it would be able to limit proxy-to-client traffic,
as squid is the sender and can limit how it sends.

But if I want to limit the speed of downloads from, say, microsoft.com
to the organisation, how does it manage that?

My limited understanding is you make a request of the ms web servers
and then they send it as fast as they can.

The only way I can think of it happening is by slowing the TCP ACKs. Or
does Squid make requests for partial ranges of files so as to fit
within the speed requirements?

A
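As far as I understand it (an assumption drawn from general Squid behaviour, not stated in the thread), Squid does not manipulate ACKs directly: once a delay bucket is empty it simply stops reading from the server socket, the kernel receive buffer fills, and ordinary TCP flow control (a shrinking advertised window) slows the origin server down. A minimal class-2 pool sketch with placeholder numbers:

```
# squid.conf sketch: one pool, class 2 = one aggregate bucket plus one
# bucket per client IP. Values are illustrative, in bytes per second:
# ~512 KB/s for everyone combined, ~64 KB/s per client IP.
delay_pools 1
delay_class 1 2
delay_parameters 1 524288/524288 65536/65536
delay_access 1 allow localnet
delay_access 1 deny all
```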


Re: [squid-users] config Q

2015-10-23 Thread Alex Samad
On 24 October 2015 at 15:01, Amos Jeffries  wrote:
> Set the cache_peer sslcafile= option with the PEM file containing the CA
> that was used to sign the office.abc.com server certificate.

Do I need to do that if the signing CA is part of the OS root bundle ?


[squid-users] config Q

2015-10-23 Thread Alex Samad
Hi

I have Squid on CentOS 6, the version that comes with it, unfortunately.

I have configured it to be a reverse proxy to our exchange box.

so it answers on office.abc.com
now I have 2 cache peers setup

10.1.1.1 - the Exchange box << all the predefined URIs go here
127.0.0.1 port 443 - the rest go here.

It's HTTPS to 127.0.0.1.

I have sslflags=DONT_VERIFY_PEER in the cache_peer command. It was
suggested that I remove this.

But the cert on the end of 127.0.0.1 is for office.abc.com. I can't use
cache_peer office.abc.com because it will just hit the squid box.

I also have the cert defined: sslcert=/etc/httpd/conf.d/office.abc.com.crt

Is that going to cause an issue? There is no subjectAltName for
127.0.0.1 in the cert. Will Squid just check the certs?
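One way to avoid DONT_VERIFY_PEER despite the 127.0.0.1 addressing, sketched under the assumption that this is a Squid 3.x cache_peer line accepting the standard ssldomain= and sslcafile= options (the CA bundle path below is a guess):

```
# squid.conf sketch: address the peer as 127.0.0.1 but verify its
# certificate against the office.abc.com name instead of the IP.
cache_peer 127.0.0.1 parent 443 0 no-query originserver ssl ssldomain=office.abc.com sslcafile=/etc/pki/tls/certs/ca-bundle.crt
```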


Re: [squid-users] NTLM Authentication Failing

2015-10-21 Thread Alex Samad
Would it be fair to say best practice is to get Kerberos working in
favour of NTLM?
On 21/10/2015 3:18 PM, "Amos Jeffries"  wrote:

> On 2015-10-21 15:38, Ilias Clifton wrote:
>
>>
>>> On 20/10/2015 4:04 p.m., Ilias Clifton wrote:
>>> > Hi All,
>>> > I've been following the guide at this location for Active Directory
>>> integration
>>> >
>>> http://wiki.bitbinary.com/index.php/Active_Directory_Integrated_Squid_Proxy
>>> >
>>> > First, some versions for sanity..
>>> > Ubuntu : 14.04.3 LTS
>>> > Squid : 3.3.8 (from ubuntu repositories)
>>> > Samba : 4.1.6-Ubuntu
>>> > DC : Windows Server 2012 R2
>>> >
>>> > I am currently testing the authentication, negotiate kerberos and
>>> basic ldap are
>>> > both working correctly. However ntlm is not and I don't seem to making
>>> any
>>> > progress on debugging further.
>>>
>>> Date: Tue, 20 Oct 2015 18:06:17 +1300
>>> From: Amos Jeffries 
>>>
>>>
>>>
>>> Your version of Squid has big problems with (4) and some with (2), and
>>> your DC server version has big problems with (1) and (3).
>>>
>>>
>>> Amos
>>>
>>>
>>>
>>>
>> Hi Amos,
>>
>> Thank you for your detailed answer.
>>
>> So what is the best way to authenticate users in a mixed environment?
>> I've got Windows domain PCs with IE/firefox/chrome. Linux PCs with
>> Firefox/chrome. Windows non-domain joined PCs with IE/firefox/chrome -
>> plus various mobile devices.
>>
>> I've tried getting rid of ntlm and just using negotiate kerberos and
>> ldap for basic, is that all I need?
>>
>
> I believe thats at least very close to the solution. The getting rid of
> NTLM is something that needs to happen at the client end though, so IE does
> not attempt to use it over Negotiate scheme.
>
>
>
>> On the non-domain joined PCs, if I disable 'Enable Integrated Windows
>> Authentication', they now correctly use basic ldap.
>>
>
> And thats the way to do it IIRC. Someone more familiar may know a better
> way.
>
>
>
>> My config now looks like..
>>
>> ### negotiate kerberos and ntlm authentication
>> auth_param negotiate program /usr/lib/squid3/negotiate_kerberos_auth
>> -d -s GSS_C_NO_NAME
>> auth_param negotiate children 10
>> auth_param negotiate keep_alive off
>>
>> ### provide basic authentication via ldap for clients not
>> authenticated via kerberos/ntlm
>> auth_param basic program /usr/lib/squid3/basic_ldap_auth -R -b
>> "DC=domain,DC=local" -D proxyuser at domain.local -W
>> /etc/squid3/ldappass.txt -f sAMAccountName=%s -h dc1.domain.local
>> auth_param basic children 10
>> auth_param basic realm Internet Proxy
>> auth_param basic credentialsttl 30 minutes
>>
>> ### ldap authorisation
>> external_acl_type memberof %LOGIN /usr/lib/squid3/ext_ldap_group_acl
>> -R -K -S -b "DC=domain,DC=local" -D proxyuser at domain.local -W
>> /etc/squid3/ldappass.txt -f
>>
>> "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,OU=Proxy,DC=domain,DC=local))"
>> -h dc1.domain.local
>>
>> Does that look ok?
>>
>
> Looks reasonable for a small installation. If you have a medium to large
> network you may find Squid mentioning queue issues and requesting more
> helper children be configured. Simply increasing the numbers there should
> resolve that.
>
> Amos
>


Re: [squid-users] winbind interface

2015-09-02 Thread Alex Samad
# ###
# Negotiate
# ###

# http://wiki.squid-cache.org/Features/Authentication
# http://wiki.squid-cache.org/Features/NegotiateAuthentication
auth_param negotiate program /usr/bin/ntlm_auth
--helper-protocol=gss-spnego --configfile /etc/samba/smb.conf-squid
auth_param negotiate children 10 startup=0 idle=1
auth_param negotiate keep_alive on

# ###
# NTLM AUTH
# ###

# ntlm auth
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp --configfile /etc/samba/smb.conf-squid
auth_param ntlm children 10
#auth_param ntlm children 10 startup=0 idle=1
#auth_param ntlm keep_alive

# ###
# NTLM over basic
# ###

# warning: basic authentication sends passwords plaintext
# a network sniffer can and will discover passwords
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic --configfile /etc/samba/smb.conf-squid
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

On 2 September 2015 at 11:15, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 2/09/2015 11:50 a.m., Alex Samad wrote:
>> Hi
>>
>> I have squid setup to use
>> NTLM and then faill back to basic.
>>
>> when it fails back to basic, my user put in
>>
>> firstname.surname@a.b.c which fails.
>>
>> if they put in firstname.surname it works
>>
>> is there some way to get squid to strip off the @<.*>
>
> That depends on which helper you are using to validate the Basic auth
> credentials. The ones which support it do so via a command line
> parameter. So check our helpers documentation to see if one exists to
> strip Kerberos/NTLM/Domain.
>
> Otherwise you can always script a helper for yourself.
>
>>
>> also is there some way to change the info in the dialogue box that pops
up
>
> The only controllable part of the popup dialog is the Realm value. Set
> by the auth_param directives "realm" parameter.
>
> IIRC the realm is usually turned into the title bar, though some
> browsers show it in quotes in the text. The form and display of the
> popup is fixed and not manipulatable by any external server for security
> reasons that should be obvious.
>
> Amos
>
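Following Amos's suggestion that "you can always script a helper for yourself", one approach is a small wrapper that strips the @realm from the username field before handing each credentials line to the real Basic helper. A sketch only: the sed rewrite is what is demonstrated here, and the ntlm_auth invocation in the comment is copied from the config above, not verified end to end (note Squid URL-encodes the fields it sends to Basic helpers):

```shell
# Strip everything from '@' to the end of the first (username) field.
# -u keeps GNU sed line-buffered, which a live helper pipeline needs.
strip_realm() {
    sed -u 's/^\([^@ ]*\)@[^ ]* /\1 /'
}

# In a real wrapper script the output would feed the actual helper, e.g.:
#   strip_realm | /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic \
#       --configfile /etc/samba/smb.conf-squid

printf 'firstname.surname@a.b.c secret\n' | strip_realm
# prints: firstname.surname secret
```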


[squid-users] winbind interface

2015-09-01 Thread Alex Samad
Hi

I have squid set up to use
NTLM and then fall back to basic.

When it falls back to basic, my users put in

firstname.surname@a.b.c, which fails.

If they put in firstname.surname it works.

is there some way to get squid to strip off the @<.*>

also is there some way to change the info in the dialogue box that pops up


[squid-users] caching question

2015-08-25 Thread Alex Samad
Hi

I want to get squid to not cache urls that terminate like this

updates/x86_64/repodata/repomd.xml
os/x86_64/repodata/repomd.xml

How do I organize that.

Having problems with old repomd.xml files making my yum updates fail.

Alex


Re: [squid-users] caching question

2015-08-25 Thread Alex Samad
Hi

Sorry add more info

I have this already in my squid.conf
acl nonCacheDom dstdomain -i /etc/squid/lists/nonCacheDom.lst
cache deny nonCacheDom


I presume I can add something similar but with urlpath_regex

acl nonCacheURL urlpath_regex .*/x86_64/repodata/repomd.xml
cache deny nonCacheURL


A

On 26 August 2015 at 11:56, Alex Samad a...@samad.com.au wrote:
 Hi

 I want to get squid to not cache urls that terminate like this

 updates/x86_64/repodata/repomd.xml
 os/x86_64/repodata/repomd.xml

 How do I organize that.

 Having problems with old repmod.xml files making my yum updates fail..

 Alex


Re: [squid-users] caching question

2015-08-25 Thread Alex Samad
Hi

Sorry, answered my own question.

acl nonCacheURL urlpath_regex .*/x86_64/repodata/repomd.xml$
cache deny nonCacheURL

seems like the $ makes it match only at the end of the path ?
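As a sanity check (not from the thread): grep -E speaks the same POSIX extended-regex dialect as urlpath_regex, so the effect of the trailing $ anchor can be tried on the command line:

```shell
# With the $ anchor the pattern matches only when the path *ends* in
# repomd.xml; without it, anything containing the substring matches too.
pat='/x86_64/repodata/repomd\.xml$'

echo "/updates/x86_64/repodata/repomd.xml" | grep -Eq "$pat" \
  && echo "match"       # path ends in repomd.xml
echo "/updates/x86_64/repodata/repomd.xml.old" | grep -Eq "$pat" \
  || echo "no match"    # trailing .old defeats the anchor
# prints: match
# prints: no match
```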

On 26 August 2015 at 11:59, Alex Samad a...@samad.com.au wrote:
 Hi

 Sorry add more info

 I have this already in my squid.conf
 acl nonCacheDom dstdomain -i /etc/squid/lists/nonCacheDom.lst
 cache deny nonCacheDom


 I presume i can add something similiar but with urlpath_regex

 acl nonCacheURL urlpath_regex .*/x86_64/repodata/repomd.xml
 cache deny nonCacheURL


 A

 On 26 August 2015 at 11:56, Alex Samad a...@samad.com.au wrote:
 Hi

 I want to get squid to not cache urls that terminate like this

 updates/x86_64/repodata/repomd.xml
 os/x86_64/repodata/repomd.xml

 How do I organize that.

 Having problems with old repmod.xml files making my yum updates fail..

 Alex


Re: [squid-users] Squid 3.5.5 CentOS RPMs release

2015-06-28 Thread Alex Samad
Thanks

On 29 June 2015 at 00:59, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey list,

 I have created the new RPM's for CentOS 6 and 7 while not mentioning I also
 created the package for OracleLinux.(which was very annoy to find out that
 the download file from Oracle was not matching an ISO but something else)

 The 3.5.5 and 3.5.4 was published here:
 http://www1.ngtech.co.il/wpe/?p=90

 Eliezer



Re: [squid-users] Mikrotik and Squid Transparent

2015-06-28 Thread Alex Samad
Hi

Thought I would reword what I got from this, to see if I understood.

If squid and the router (default gateway) are on the same box
then
    DNAT to the squid listening port and local IP (can you use localhost?
    I suppose it doesn't matter)
else
    route the packet to the squid box (if possible)
    DNAT on the squid box to the local listening port and IP


Squid is able to look in the NAT table to confirm what the original
destination was, rather than the DNAT'ed IP?


Does that sum it up ?


Alex



On 28 June 2015 at 21:11, Amos Jeffries squ...@treenet.co.nz wrote:
 On 28/06/2015 10:37 p.m., Dalmar wrote:
 To begin with, thank you Marcel,Alex and Amos for your help guys i am
 really so close because of you. I have done exactly what Marcel told me and
 now all transparent/intercept errors are gone. It worked nicely when i used
 two mikrotiks one for WAN and the other for the LAN connection, however,
 when i use one mikrotik it says TCP_MISS_ABORTED and NONE_ABORTED. In this
 situation ,squid gets internet from the MK LAN port using a public IP and i
 can ping the net, but squid throws the above error in the access.log. The
 topo i wanna use is INTERNET MK  SQUID .
 i think the iptable rules will change.The Mikrotik have 3 NICS now , but i
 can add 1 more so it becomes eth0:WAN eth1:LAN eth2:PROXY-LAN
 eth3:PROXY-WAN .

 You should not need extra NICs for this. The Mikrotik rules just need to
 distinguish the flows clearly.

 a) LAN-WAN dst port TCP/80 use gateway eth2
 b) *-WAN use gateway eth0
 c) *-Squid use gateway eth2
 d) *-LAN use gateway eth1


 NB: it says Your message to squid-users awaits moderator approval , Message
 body is too big ,for all my replays! so sorry for the delay.

 NP: We have a 40KB size limit on posts to these lists. Moderation for
 others and the moderators procrastinate.

 Amos
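As configuration, the "DNAT on the squid box" branch of the summary above is often written with REDIRECT, which is DNAT to a local port. A sketch only, assuming Squid listens with `http_port 3129 intercept` and eth0 is the LAN-facing interface:

```
# Run on the Squid machine itself, so Squid can consult the kernel NAT
# table and recover the original destination IP:port.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129
```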


Re: [squid-users] Mikrotik and Squid Transparent

2015-06-27 Thread Alex Samad
On 27 June 2015 at 16:33, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27/06/2015 10:02 a.m., Alex Samad wrote:
 Hi

 Sorry missing something here.

 I thought this was a mikrotek rtr , presumably acting as a default
 gateway for the local lan to the internet.
 it has a DNAT rule to capture all internet traffic that is port 80
 (and presumably at some point in time port 443) and it DNATS it to the
 SQUID box.

 and there needs to be a special rule on the DGW to allow squid access
 out to the internet with out resending it back to the squid and
 creating a loop.

 from memory thats how I used to do this. unless the DGW is large
 enough to run squid, then DNAT to the local box and onto squid.

 Yes, a lot of people used to do it that way. The problem was
 CVE-2009-0801 vulnerability allowed attackers script to send any request
 to Squid claiming an arbitrary server Host: header and get that content
 both delivered back as if it was to some other domain the client thought
 it was connecting to and injected into Squid cache for other clients to
 be affected by in the same way.

 That is no longer permitted since Squid-3.2. The DNAT can only happen
 once, and that must be on the Squid machine so Squid can lookup the NAT
 tables and unmangle the original dst-IP.

 You need to use routing rules on the Mikrotik (or tunnel sometimes works
 too) to deliver the original client generated packet to the Squid
 machine without NAT changing the dst-IP:port details (SNAT is fine, but
 will cause lies about client IP in the access.log).

Okay good to know.

Alex


Re: [squid-users] Mikrotik and Squid Transparent

2015-06-26 Thread Alex Samad
Hi

Sorry missing something here.

I thought this was a mikrotek rtr , presumably acting as a default
gateway for the local lan to the internet.
it has a DNAT rule to capture all internet traffic that is port 80
(and presumably at some point in time port 443) and it DNATS it to the
SQUID box.

and there needs to be a special rule on the DGW to allow squid access
out to the internet with out resending it back to the squid and
creating a loop.

from memory thats how I used to do this. unless the DGW is large
enough to run squid, then DNAT to the local box and onto squid.

Why would there be a DoS for SQUID on another box, the only resources
I can think of is the NAT table, maybe conntrack

Alex



On 26 June 2015 at 22:49, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27/06/2015 12:14 a.m., Alex Samad wrote:
 aren't squid and nat box different ? that was my presumption..


 Best not to.

 The dst-IP:port on the TCP packets entering the Squid machine is where
 Squid will send the outgoing server requests. If that dst-IP is the IP
 of the Squid machine itself you get into big DoS-level trouble really fast.

 Amos



Re: [squid-users] Mikrotik and Squid Transparent

2015-06-26 Thread Alex Samad
Aren't the squid and NAT boxes different? That was my presumption.

On 25 June 2015 at 19:07, Amos Jeffries squ...@treenet.co.nz wrote:
 On 25/06/2015 12:45 p.m., Alex Samad wrote:
 Hi

 why this, doesn't this block all traffic getting to the squid port.
 iptables -t mangle -A PREROUTING -p tcp --dport $SQUIDPORT -j DROP

 All external traffic yes. The NAT interception happens afterward and works.

 The point is that NAT intercept MUST only be done directly on the Squid
 machine. A single external connection being accepted will result in a
 forwarding loop DoS and the above protects against that.



 what I would do to test is run tcpdump on the squid box and capture
 all traffic coming to it on the squid listening port,

 IIRC, you can't do that because tcpdump operates before NAT. It will not
 show you the NAT'ed traffic arriving.

 Running Squid with -X or debug_options ALL,9 would be better. You can
 see in cache.log what Squid is receiving and what the NAT de-mangling is
 actually doing.

 Amos


Re: [squid-users] Mikrotik and Squid Transparent

2015-06-24 Thread Alex Samad
Hi

why this, doesn't this block all traffic getting to the squid port.
iptables -t mangle -A PREROUTING -p tcp --dport $SQUIDPORT -j DROP


what I would do to test is run tcpdump on the squid box and capture
all traffic coming to it on the squid listening port, then go to a
test machine on the eth or wireless and do a telnet google.com 80 and
see what you get on the squid box.

make sure your src and dst addresses are right, then check the squid logs.

I presume you can get to the internet from the squid box ?



On 24 June 2015 at 22:30, Dalmar maamul...@gmail.com wrote:
 squid 3.3.8 and ubuntu 15.04 server

 2015-06-24 15:04 GMT+03:00 Yuri Voinov yvoi...@gmail.com:

 Squid 3.5.x?

 24.06.15 18:03, Dalmar пишет:

 Hi,
 For over two weeks i am having a really headache in configuring squid
 transparent/intercept.
 I have tried different options and configurations but i couldn't get it to
 work.
 i think the problems lies in the Iptables / NAT but i really couldn't
 solve it.
 I have tried different iptable rules including the intercept linuxDnat -
 sysctl configuration, but didnt work.

 # your proxy IP
 SQUIDIP=X.X.X.X

 # your proxy listening port
 SQUIDPORT=


 iptables -t nat -A PREROUTING -s $SQUIDIP -p tcp --dport 80 -j ACCEPT
 iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination
 $SQUIDIP:$SQUIDPORT
 iptables -t nat -A POSTROUTING -j MASQUERADE
 iptables -t mangle -A PREROUTING -p tcp --dport $SQUIDPORT -j DROP


 i have to say that squid works well when i configure in the client
 browsers.

 at the mikrotik side, i am using DST-NAT chain port 80 pro TCP action
 DST-NAT to address squidIP and Port

 i am using ubuntu server 15.04 using squid 3.3.8 and this is my
 configuration and the errors i get:


            -- eth0 WAN -- MAIN WAN Public IP -- Internet
  MK -------|
            -- eth1 LAN
            |
            -- eth2 Proxy


               -- eth0 WAN -- Public IP -- Internet -- gets internet
               from 24online / another Mikrotik
  Squid -------|
               -- eth1 Proxy
               |
               -- eth2 webmin -- For server Management


 - Error 1: if no intercept/transparent mode and no iptables rules are configured:
   - Invalid URL - The requested URL could not be retrieved
   - but if the proxy is configured in the user's browser, it works!


 - Error 2: if intercept mode and the iptables DNAT rule are configured:
   - Access Denied, and TCP_MISS/403 in the access log
   - no forward-proxy ports configured
   - security alert: host header forgery detected on local=SquidIP:8080
     remote=MikrotikIP (local ip does not match any domain name)
   - warning: forwarding loop detected (X-Forwarded-For: Mikrotik LAN IP)

 squid.conf

 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost manager
 http_access deny manager
 http_access allow localnet
 http_access allow localhost
 http_access deny all
 http_port 8080
 http_port 8181
 cache_mem 2000 MB
 cache_dir ufs /var/spool/squid3 10 16 256
 coredump_dir /var/spool/squid3
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
 refresh_pattern . 0 20% 4320
 cache_effective_user proxy
 cache_effective_group proxy

 
 I am really confused, can anyone guide me please.
 Thanks in advance


 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users









Re: [squid-users] Memory usage question

2015-06-21 Thread Alex Samad
Hi

UFS or AUFS? I'm guessing AUFS.

Any suggestions on the L1/L2 values, or are the defaults fine?


On 21 June 2015 at 11:57, Amos Jeffries squ...@treenet.co.nz wrote:
 On 20/06/2015 9:08 p.m., Alex Samad wrote:
 Hi

 Are there any gotchas I need to look out for?
 Also, I have allocated a 1 TB LUN to the VM. What's the best way to
 allocate it: do I use one cache_dir or multiple cache_dirs?

 The usual: one UFS-based dir per physical drive and no RAID. That can be
 tricky with SAN/NAS based disk and VMs.


 I currently have 3; is there a way to migrate the cache objects in the
 3 into 1, or do I just delete them and bear the cost of re-downloading
 them?

 If you need to merge them you can set cache_dir to read-only for a
 period. That way Squid will use their content until it becomes too far
 out of date, while storing new objects into the writable cache_dir.
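
A sketch of what that could look like in squid.conf (paths and sizes assumed; note that in Squid 3 the cache_dir option is spelled no-store, with read-only being the older, deprecated spelling):

```
# New writable cache_dir that absorbs fresh objects:
cache_dir aufs /var/spool/squid3/d1 512000 16 256

# Old dirs kept around for HITs only; nothing new is stored in them:
cache_dir aufs /var/spool/squid3/d2 512000 16 256 no-store
cache_dir aufs /var/spool/squid3/d3 512000 16 256 no-store
```

Once the old dirs stop producing useful HITs, they can be dropped from the config and deleted.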


 It should not be a big problem/cost to drop the cache anyway. The
 bandwidth to rebuild a cache is far smaller than most people expect.
 Much of the content in a large cache is stale objects awaiting
 revalidation or replacement, and all HITs are duplicates by definition.
 The cache fill rate is an exponential/polynomial growth curve with the
 bulk of it being a few seconds/minutes worth of traffic.

 Amos



Re: [squid-users] Memory usage question

2015-06-20 Thread Alex Samad
Hi

Are there any gotchas I need to look out for?
Also, I have allocated a 1 TB LUN to the VM. What's the best way to
allocate it: do I use one cache_dir or multiple cache_dirs?

I currently have 3; is there a way to migrate the cache objects in the
3 into 1, or do I just delete them and bear the cost of re-downloading
them?




On 19 June 2015 at 21:16, Eliezer Croitoru elie...@ngtech.co.il wrote:
 First things first...
 Upgrade to the 3.5 or 3.4 branch.
 Then try to use top or htop to get a snapshot of the virtual memory and
 resident memory that squid uses.

 Eliezer

 On 19/06/2015 13:19, Alex Samad wrote:

 this is on CentOS 6.6,
 still using the Red Hat-built squid!
 rpm -q squid
 squid-3.1.10-29.el6.x86_64





Re: [squid-users] High-availability and load-balancing between N squid servers

2015-06-09 Thread Alex Samad
Hi

I run 2 squid boxes, and I use Pacemaker to float 2 VIPs between the 2 boxes.

Basically I just run squid on both, and I create a VIP resource that
tests whether squid is running before allocating the VIP.

This doesn't really give you load balancing, but it gives very good resilience.


Pacemaker and Linux have the ability to do load balancing, by using a
shared IP and a hashing algorithm, but I haven't tested it.
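
A hedged sketch of that active-active variant using pcs (resource name and address are placeholders; the clusterip_hash mechanism relies on the kernel CLUSTERIP target and should be tested before any production use):

```
# One shared IP cloned across both nodes; the connection hash
# decides which node answers a given client:
pcs resource create proxy-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.0.100 cidr_netmask=24 \
    clusterip_hash=sourceip-sourceport
pcs resource clone proxy-vip clone-max=2 clone-node-max=2 \
    globally-unique=true
```

If one node fails, Pacemaker moves its clone instance to the surviving node, so clients keep working through the same shared IP.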



On 9 June 2015 at 22:51, Amos Jeffries squ...@treenet.co.nz wrote:
 On 9/06/2015 7:15 p.m., Rafael Akchurin wrote:
 Hi Amos,

 snip

 There seems to be a bit of a myth going around about how HAProxy does
 load balancing. HAProxy is an HTTP layer proxy. Just like Squid.

 They both do the same things to received TCP connections. But HAProxy
 supports less HTTP features, so its somewhat simpler processing is also
 a bit faster when you want it to be a semi-dumb load balancer.

 We somewhat recently added basic support for the PROXY protocol to
 Squid, so HAProxy can relay port 80 connections to Squid-3.5+ without
 processing them fully. However, Squid does not yet support that on
 https_port, which means the TLS connections still won't have client IP
 details passed through.

 So what would be your proposition for the case of SSL Bump?
 How to get the connecting client IP and authenticated user name passed to
 the ICAP server when a cluster of squids is somehow getting the CONNECT
 tunnel established?

 Assume we leave haproxy out and rely solely on squid - how would you
 approach this, and how many instances of squid would you deploy?

 From my limited knowledge, the FQDN proxy name resolving to a number of
 IP addresses, with one squid running per IP address, is the simplest approach.


 Yes, it would seem to be the only form which meets all your criteria
 too. Everything else runs up against the HTTPS brick wall.

 Amos


[squid-users] netflix

2015-06-06 Thread Alex Samad
Hi

I remember seeing some rules for caching Microsoft updates. Is there
anything special needed to cache Netflix?

Alex


[squid-users] https_port question

2015-05-19 Thread Alex Samad
Hi

Looking at http://www.squid-cache.org/Doc/config/https_port/

I am trying to work out where I place intermediate CA certs.

I am setting up a reverse proxy and trying to terminate the SSL here.

cert= points to an SSL certificate PEM file; this seems to be a combined
public and private key file. Can I also place the intermediates here?
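
As an illustration of how such a combined PEM is usually assembled (file names are placeholders, and the private key can alternatively be given separately via key=): the server certificate comes first, followed by the intermediates in leaf-to-root order.

```shell
# Build a combined PEM (dummy contents stand in for real key/certs):
mkdir -p /tmp/pemdemo
cd /tmp/pemdemo
printf -- '-----BEGIN PRIVATE KEY-----\nkey\n-----END PRIVATE KEY-----\n'    > server.key
printf -- '-----BEGIN CERTIFICATE-----\nserver\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\ninter\n-----END CERTIFICATE-----\n'  > intermediate.crt

# key + server cert + intermediate(s), in that order:
cat server.key server.crt intermediate.crt > bundle.pem
grep -c 'BEGIN CERTIFICATE' bundle.pem   # prints 2
```

The resulting file would then be referenced as cert=/path/to/bundle.pem on https_port; whether the key lives in the same file or in a separate key= file is a matter of preference.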

Alex


[squid-users] bandwidth limiting

2015-04-23 Thread Alex Samad
Hi

Is there any way to limit the bandwidth squid uses to pull stuff from
the internet?

Can it slow down requests, delay ACKs, or ...?
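
One built-in mechanism worth looking at is delay pools, which throttle how fast Squid delivers data to clients (and, for cache misses, effectively how fast it pulls from the internet). A minimal sketch, with the numbers purely illustrative:

```
# squid.conf: one aggregate class-1 pool covering everyone.
# delay_parameters is restore-rate/bucket-size in BYTES:
# ~1 Mbit/s sustained here, with a 2 MB burst bucket.
delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 125000/2000000
```

Note that delay pools shape delivery to clients rather than the upstream fetch directly, though for misses the two are closely coupled.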

A


[squid-users] tcp_outgoing_address

2015-04-15 Thread Alex Samad
Hi

I have squid-3.5.2-2.el6.x86_64 on centos 6.6

I am trying to direct traffic for certain destinations out via certain source IP addresses.


acl viaTest dstdomain .abc.com

tcp_outgoing_address 192.168.11.11 viaTest

This works well for

www.abc.com and test.abc.com when they resolve to IPv4 addresses,
but when they resolve to IPv6 it fails :(

So I tried adding

dns_v4_first on

but it doesn't seem to help :(

So am I right in presuming that, because the name resolution happens
first and it resolves to IPv6, the request will not go out with a src of
192.168.11.11?

Why doesn't the IPv4-first flag work?
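
The squid documentation for tcp_outgoing_address suggests handling the two address families with separate rules, roughly along these lines (the v6 source address is a placeholder; without a v6 rule, IPv6 destinations simply use the default source):

```
acl to_ipv6 dst ipv6

tcp_outgoing_address 192.168.11.11 viaTest !to_ipv6
# Only if you have a routable IPv6 source address to pin:
# tcp_outgoing_address 2001:db8::11 viaTest to_ipv6
```

dns_v4_first only changes the preferred order of resolved addresses; it does not stop Squid from using an IPv6 destination when one is selected, so the address-family split above is the more reliable fix.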


Re: [squid-users] State of www1.ngtech.co.il

2015-04-09 Thread Alex Samad
What I found was that I couldn't yum install / yum update, but I could
directly download the rpm with wget (without a proxy as well!).
Strange!



On 9 April 2015 at 16:47, Henri Wahl h.w...@ifw-dresden.de wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi list,
 does anybody know what is the matter with www1.ngtech.co.il? This is
 the source for RPM packages of squid but it seems to be dried up for
 some days now.
 Regards

 - --
 Henri Wahl

 IT Department
 Leibniz-Institut fuer Festkoerper- u.
 Werkstoffforschung Dresden

 tel: +49 (3 51) 46 59 - 797
 email: h.w...@ifw-dresden.de
 https://www.ifw-dresden.de

 Nagios status monitor Nagstamon: https://nagstamon.ifw-dresden.de

 DHCPv6 server dhcpy6d: https://dhcpy6d.ifw-dresden.de

 S/MIME: https://nagstamon.ifw-dresden.de/pubkeys/smime.pem
 PGP: https://nagstamon.ifw-dresden.de/pubkeys/pgp.asc

 IFW Dresden e.V., Helmholtzstrasse 20, D-01069 Dresden
 VR Dresden Nr. 1369
 Vorstand: Prof. Dr. Manfred Hennecke, Kaufmännische Direktorin i. V.
 Dipl.-Kffr. Friederike Jaeger
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2

 iEYEARECAAYFAlUmIIEACgkQnmb3Nh+6CUIEYACcDuQKyYq7FIqA5Kr+Ykbf90k4
 bh8AnjYEaXryCQ8q/Ki2JOXHDyjyYALk
 =HhFG
 -END PGP SIGNATURE-


Re: [squid-users] help setting up hierarchy

2015-03-16 Thread Alex Samad
[snip]


 Config questions
 1) how do I get user authentication to flow through?
   if a user requests from squid-a and it fetches from squid-b, I
 would like the user IDs logged on both.
   if a user requests from the new squid to either squid-a or squid-b, I
 would like the auth (which would be done on new-squid) to flow through
 to either squid-a or squid-b.

 This is not possible with NTLM authentication.

 NTLM is authenticating the TCP connection between client and proxy
 underneath the HTTP layer and has a complex handshake setting up
 security token per-connection with the DC server. The TCP connection
 outbound from the proxy is a different connection, and also is not from
 the client.

 It's possible with Negotiate/Kerberos or Basic auth. Even though
 Negotiate is also authenticating the TCP connection, the handshake is
 simpler and the token can be relayed to the peer proxy.

 NP: Though be careful in an environment using NTLM. You may get
 Negotiate/NTLM tokens flowing around, which won't work any more than NTLM
 does.

Sounds like the simplest thing to do is to turn on authentication on all
the boxes and allow them non-auth access to each other.




 2) how do I setup ICP to work properly

 Use HTCP for better HIT ratio with less false positives in HTTP/1.1.

Ta, I will have to have a read. Does it work well? Any examples on how to set it up?
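
A minimal sketch of what the HTCP variant might look like on each sibling, assuming the packages were built with HTCP support (port 4827 is the conventional default; host names match the earlier config):

```
# squid.conf on squid-a (mirror it on squid-b):
htcp_port 4827
htcp_access allow icp_allowed
htcp_access deny all

cache_peer squid-b sibling 3128 4827 htcp proxy-only
```

The htcp option on cache_peer replaces the ICP query port with HTCP, so the icp_port/icp_access lines can eventually be retired.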


 3) is the cache_peer to squid-a squid-b from new-squid type parent ?

 No. But to get the authentication to work you will need login=PASSTHRU
 parameter (and be using Basic or Negotiate/Kerberos).
What if I just want the authenticated user ID to flow through? So the
authentication happens on the office squid, which then forwards to the
DC squid, and the DC squid can log the user name in the user field. Is that
possible?


 4) do I need to allow ICP clients full access, this is the squid-a to
 squid-b link ?

 You should not have to. However, it also should not matter - when the
 first proxy is doing auth you know the traffic coming out of it is
 authenticated. Not doing auth twice is faster.

Is there a way to say that anyone connecting on port X doesn't need to be
authenticated, but anyone on port Y does?
My issue is that in the office I have a few Eclipse users who had a
lot of problems with our previous proxy solution. They are set up to
use the office proxy in a non-auth way, but now I want to set up auth on
this squid box. I was thinking there could be a non-auth port and an
auth port.
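
Something along these lines might work, using the myportname ACL to exempt one listening port from authentication (port numbers and ACL names are illustrative, and auth_users assumes an existing auth_param setup):

```
http_port 3128 name=authport
http_port 3129 name=openport

acl eclipse_port myportname openport
acl auth_users proxy_auth REQUIRED

# Port 3129: no authentication required (still restricted by source):
http_access allow eclipse_port localnet
# Port 3128: authenticated access only:
http_access allow auth_users
http_access deny all
```

The Eclipse clients would then be pointed at port 3129 while everyone else keeps using 3128.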

A



 Amos



[squid-users] help setting up hierarchy

2015-03-15 Thread Alex Samad
Hi

I have 2 squid boxes, one in each of my 2 DCs.

They are on the same VLAN/IP network, and I use DNS round-robin.

cache_peer other sibling 3128 3130 proxy-only

In addition to this I added:


# ICP ALLOW
acl icp_allowed src 10.3.2.1/32   # the IP of the other squid box, to allow ICP


http_access allow icp_allowed   # need to allow this so that squid-a
# can request from squid-b without authenticating (do I need to do this?)

icp_port 3130
icp_access allow icp_allowed
icp_access deny all

these are running squid-3.1.10-29.el6.x86_64

my new box (in the office) is running
squid-3.4.10-1.el6.x86_64

cache_peer squid-b parent 3128 0 weighted-round-robin weight=5
cache_peer squid-a parent 3128 0 weighted-round-robin weight=2

I had to turn on ICP; I kept seeing 'not allowed' errors!

We have authenticated access to the proxy, usually via NTLM, so all
requests are logged against a user.

I do have some boxes that need unauthenticated access

Config questions
1) how do I get user authentication to flow through?
  if a user requests from squid-a and it fetches from squid-b, I
would like the user IDs logged on both.
  if a user requests from the new squid to either squid-a or squid-b, I
would like the auth (which would be done on new-squid) to flow through
to either squid-a or squid-b.
2)


Re: [squid-users] Interesting problem

2015-02-28 Thread Alex Samad
Me (Alex)?

Forward proxy?



On 27 February 2015 at 05:18, Eliezer Croitoru elie...@ngtech.co.il wrote:
 On 25/02/2015 06:18, Alex Samad wrote:

 Hi

 I am running squid on Centos 6.5
 squid-3.1.10-29.el6.x86_64


 Hey Mike,

 Can you share your squid.conf?

 It's unlikely that 3.1.10 has the feature you might want.
 Are you trying to intercept ssl traffic or just use it as a reverse proxy?

 Eliezer



[squid-users] Interesting problem

2015-02-24 Thread Alex Samad
Hi

I am running squid on Centos 6.5
squid-3.1.10-29.el6.x86_64

When I browse to https://www.quadriserv.com from IE or Chrome via the
squid proxy, it seems to corrupt the server cert.

When I browse to the site bypassing squid, it works fine.

I have tried wget from the squid box (works fine) and also tried openssl s_client:

openssl s_client -connect www.quadriserv.com:443 -showcerts < /dev/null | less

-BEGIN CERTIFICATE-
MIIFyTCCBLGgAwIBAgIRAJfNWR72clr8JgXbvgA+uqgwDQYJKoZIhvcNAQEFBQAw
YjELMAkGA1UEBhMCVVMxITAfBgNVBAoTGE5ldHdvcmsgU29sdXRpb25zIEwuTC5D
LjEwMC4GA1UEAxMnTmV0d29yayBTb2x1dGlvbnMgQ2VydGlmaWNhdGUgQXV0aG9y
aXR5MB4XDTEzMTAyMjAwMDAwMFoXDTE4MDQxMjIzNTk1OVowgdMxCzAJBgNVBAYT
AlVTMQ4wDAYDVQQREwUxMDAxNzELMAkGA1UECBMCTlkxFjAUBgNVBAcTDU5ldyBZ
b3JrIENpdHkxEzARBgNVBAkTCjE0dGggRmxvb3IxFjAUBgNVBAkTDTUyOSBGaWZ0
aCBBdmUxFzAVBgNVBAoTDlF1YWRyaXNlcnYgSW5jMQswCQYDVQQLEwJJVDEhMB8G
A1UECxMYU2VjdXJlIExpbmsgU1NMIFdpbGRjYXJkMRkwFwYDVQQDFBAqLnF1YWRy
aXNlcnYuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3Oa12dWu
84WE2CeA0hVoFAw50+HpoB30Gi7uQ9/NK0A+gt8Igk5Vcwub6atldIiVc62k7v/9
DPZNoBxsOVopaTuDA54E6wnHEYve6VCr2xlQAnJEraIDZnvvQG/YnC8/ll44Yg06
MWVvMSug7oDSLhPPRX5ZjkQikpB6XKO1OhUUOJghUfo0YlG4I/8MBWpvJitaJOH9
pELBmepJFcpBvkij20Nk6MZu8kwzVs21Rp4FTEHpSH9Iagn7kw186nHqZkl+9D7e
UxM4IKc74j++Z2RjEPpPLLMcJYakD6kgkCUSkqiGmUS6R/4KBtbsE39lgJxNQDHU
Kqn5boHiyOjEZwIDAQABo4ICBjCCAgIwHwYDVR0jBBgwFoAUPEHijwgIqUwliY1t
xTjQ/IWMYhcwHQYDVR0OBBYEFCEGQeaf1tkMz9/3AA3y99GiCgzQMA4GA1UdDwEB
/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEF
BQcDAjB1BgNVHSAEbjBsMGAGDCsGAQQBhg4BAgEDATBQME4GCCsGAQUFBwIBFkJo
dHRwOi8vd3d3Lm5ldHdvcmtzb2x1dGlvbnMuY29tL2xlZ2FsL1NTTC1sZWdhbC1y
ZXBvc2l0b3J5LWNwcy5qc3AwCAYGZ4EMAQICMHoGA1UdHwRzMHEwNqA0oDKGMGh0
dHA6Ly9jcmwubmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zX0NBLmNybDA3
oDWgM4YxaHR0cDovL2NybDIubmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25z
X0NBLmNybDBzBggrBgEFBQcBAQRnMGUwPAYIKwYBBQUHMAKGMGh0dHA6Ly93d3cu
bmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zX0NBLmNydDAlBggrBgEFBQcw
AYYZaHR0cDovL29jc3AubmV0c29sc3NsLmNvbTAbBgNVHREEFDASghAqLnF1YWRy
aXNlcnYuY29tMA0GCSqGSIb3DQEBBQUAA4IBAQCsgRiTxwFDYa+3AZFzFj7XuhP3
LuEuI55Ppj0SwLfBjLeiHuQB616V536O1TWqbJGUc1KhXwiTh6kDFx5RXVGohV1f
qoaVFoKMkX+fVkG3VNjGmaqaZalweWRf0s6jMskWuSUQkWdADGnNCnqRxIrtyLfS
7/OHak+o2W0R+0jdsiUiLC7iZLzgpdFwHUa1wEVSjz2rCaI0TjEDkUKGfDITzZ9J
IY64c7QiYjzNF/PzlCIpL6zwPqnswLp25WOPM1jE4mqsK/9Z6Q0SWckk8WRTnlQA
YIbTFxXiY5fkkc4wdNNJZDv2R/nW9VkkK4u4qiJQ5Q5Y3iqHic+D3GZ2l2nT
-END CERTIFICATE-

seems to be okay

But the one thing I can't do is verify it. It seems like
C=US, O=Network Solutions L.L.C., CN=Network Solutions Certificate Authority
is missing from my root CA bundle.

Would that be enough to cause this?
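
A quick way to test that theory, assuming the missing intermediate has been downloaded to a local file (the file names here are placeholders), would be something like:

```
# Does the chain validate once the intermediate is supplied?
openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt \
    -untrusted networksolutions-intermediate.pem quadriserv.pem
```

If that succeeds while a plain openssl verify against the server cert alone fails, the server is most likely not sending the intermediate, and clients that don't have it cached will see chain errors.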

Alex