Re: [squid-users] Read Timeout

2015-03-13 Thread Amos Jeffries
On 14/03/2015 2:34 a.m., sci wrote:
> Hi,
> 
> I search a solution for the follow "problem":
> 
> A website with a simple button (POST) needs more than 15min to send the
> information what the clients needs (a simple excel sheet).
> We don't want to change the global settings at the "Read Timeout", so I
> try to find something like a ACL for the squid.
> 
> Is it possible to at a ACL to the squid.conf like:
> 
> # rule only for the slow website
> acl slow-server dstdomain .someserver-slow.com
> read_timeout 30 minutes slow-server
> # end rule
> 

15min is not exactly a Squid limit. It's just a default chosen to match
other timeout limits around the Internet.

A read happens (and the read_timeout restarts) whenever any TCP packets
arrive. At some point between 5 and 60min (usually toward the lower end)
with no packets at all occurring, the TCP stacks and NAT systems all along
the network path of the connection start discarding their records about
the connection.

If the server is not responding with at least one packet within a
5-60min period, the connection has a growing risk of no longer existing.
Squid's default lets it hang for 15min before considering it too
risky to use anymore and releasing all the resources (and there are a LOT
of resources used down the whole chain of machinery between client and
server).

Don't worry about extending the timeout globally to 30min. It will have
no effect at all on servers or traffic that respond promptly to requests.
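For example, the whole change is a one-line squid.conf tweak (a sketch;
read_timeout is a global directive and does not take an ACL, so the value
simply applies everywhere):

  # the default is: read_timeout 15 minutes
  read_timeout 30 minutes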


The best thing to do, though, is to get the server fixed (i.e. complain to
the people responsible for the code generating those documents). Sometimes
they can fix the total speed, sometimes not. But either way HTTP
contains several mechanisms allowing objects to be streamed to the client
as they are generated, even with error recovery and data validation.
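One such mechanism is HTTP/1.1 chunked transfer encoding, which lets the
server start sending the document while it is still being generated. A
rough sketch of such a response on the wire (header values are
illustrative):

  HTTP/1.1 200 OK
  Content-Type: application/vnd.ms-excel
  Transfer-Encoding: chunked

  1000
  ...first 0x1000 bytes of the spreadsheet...
  1000
  ...next 0x1000 bytes...
  0

Each chunk is prefixed with its size in hex, and a zero-size chunk ends
the message. Every packet arriving this way also restarts Squid's
read_timeout.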

Amos



Re: [squid-users] Reverse Proxy Funny Logging Issue

2015-03-13 Thread Amos Jeffries
On 14/03/2015 5:19 a.m., dweimer wrote:
>>
>> Last night I applied the FreeBSD 10.1-RELEASE-p6 update and upgraded
>> the ports, which included Squid 3.4.12. I enabled the LAX HTTP option
>> in the ports configuration, which adds the --enable-http-violations
>> compile option, with the intention of enabling the broken_posts option in
>> the configuration. I will hopefully be able to apply any necessary
>> changes to the production system after I test them now.
>> When doing this update I did have a thought: the system is running in a
>> FreeBSD jail and not on the base system; is there a chance this issue
>> is caused by running within a jail? Curious if anyone has run into
>> specific issues running Squid in a FreeBSD jail before?
> 
> Well I am at a loss; debugging hasn't led to anything more than "a
> timeout occurs". I was able to create a test PHP form to upload files on
> an Apache server and upload up to a 264MB file. I didn't try any larger
> files, though I suspect it would work up to the 1024MB that I
> had Apache configured for. So it's not all HTTPS, only those files going
> to our OWA and Sharepoint servers. The only setting I can find that
> changes the behavior at all is to change the "write_timeout" to
> something smaller, like 45 seconds, and then it errors sooner instead of
> taking forever to give up.
> 
> I tried uninstalling the Squid 3.4 FreeBSD port and using the 3.3 port
> instead on the test system, no change. I also tried installing 3.5 from
> source using the same configure options that my 3.4 port returned with
> squid -v, again no change.
> 
> I have verified that the IIS logs show a client request timeout has
> occurred; the broken_posts allow didn't create any change in behavior. I
> do know that if I point the browser directly to the Exchange server it
> works, so it's only broken going through the reverse proxy. If I
> point the browser through a forwarding Squid proxy that knows how to
> talk directly to the Exchange server instead of the reverse proxy, it
> works with no special settings. If I post a large debugging file to a
> website, do I have any volunteers to look at it and see if they can see
> what's going on?
> 


It sounds like it's not actually a Squid problem. There is enough code
churn between the major versions you tried that, if it were a Squid bug,
the behaviour should have changed between them.


Could it be the end-of-line issue in Python?


Amos


Re: [squid-users] i hope to use external ACL + ldap at squid 3.5.2, but i don't find ext_ldap_group_acl and basic_ldap_auth from /squid/libexec/

2015-03-13 Thread Amos Jeffries
On 14/03/2015 5:37 a.m., johnzeng wrote:
> 
> Hello All:
> 
> I hope to use external ACL + LDAP with Squid 3.5.2, but I don't find
> ext_ldap_group_acl and basic_ldap_auth in /squid/libexec/
> 
> If possible, please give me some advice. Thanks
> 

You are missing the LDAP libraries needed to build them.
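For example, a sketch of the fix (package names are an assumption and
vary by distro: openldap-devel on RHEL/CentOS, libldap2-dev on
Debian/Ubuntu):

  # install the OpenLDAP development libraries, then rebuild Squid
  yum install openldap-devel
  ./configure ... '--enable-auth-basic=LDAP' '--enable-external-acl-helpers=LDAP_group'
  make && make install

After rebuilding, basic_ldap_auth and ext_ldap_group_acl should appear in
/squid/libexec/.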

> This is my config
> 
> 
> configure options: '--prefix=/accerater/webcache3'
> '--enable-follow-x-forwarded-for' '--enable-snmp'
> '--enable-linux-netfilter' '--enable-storeio=aufs,rock'
> '--enable-wccpv2' '--with-large-files'
> '--enable-removal-policies=lru,heap' '--enable-async-io=128'
> '--enable-http-violations'

All of these ...

> '--enable-default-err-language=English'
> '--enable-err-languages=English' '--enable-referer-log'
> '--enable-useragent-log'

... to here are options that no longer exist.

> '--with-maxfd=65536'
> '--enable-large-cache-files' '--enable-delay-pools'
> '--enable-forward-log' '--with-pthreads' 'LIBS=-ltcmalloc'
> '--disable-internal-dns'

Disabling internal DNS is no longer an available option.

> '--enable-url-rewrite-helpers'
> '--enable-log-daemon-helpers' '--enable-epoll'
> '--enable-ltdl-convenience' '--with-included-ltdl'
> '--enable-disk-io=AIO,Blocking,DiskThreads,IpcIo,Mmapped'
> 
> This is full file at /squid/libexec
> 
> 
> basic_db_auth basic_ncsa_auth basic_smb_auth digest_file_auth
> ext_unix_group_acl log_file_daemon storeid_file_rewrite
> basic_fake_auth basic_nis_auth basic_smb_auth.sh ext_delayer_acl
> ext_wbinfo_group_acl negotiate_wrapper_auth unlinkd
> basic_getpwnam_auth basic_pop3_auth basic_smb_lm_auth
> ext_file_userip_acl helper-mux.pl ntlm_fake_auth url_fake_rewrite
> basic_msnt_multi_domain_auth basic_radius_auth cachemgr.cgi
> ext_sql_session_acl log_db_daemon ntlm_smb_lm_auth url_fake_rewrite.sh

Amos



Re: [squid-users] CentOS 6 repo

2015-03-13 Thread Amos Jeffries
On 14/03/2015 2:39 p.m., Alex Samad wrote:
> Hi
> 
> Quick on
> 
> squid.x86_64            7:3.4.10-1.el6  @squid
> squid-debuginfo.x86_64  7:3.4.10-1.el6  squid
> squid-helpers.x86_64    7:3.4.10-1.el6  squid
> squid-sysvinit.x86_64   7:3.4.3-1.el6   squid
> 
> the sysvinit is not up to date !
> 

The init script is not exactly changing much.

> 
> and
> yum install squid-helpers.x86_64
> Error: Package: 7:squid-helpers-3.4.10-1.el6.x86_64 (squid)
>Requires: perl(Crypt::OpenSSL::X509)
> 
> any chance to host the perl package at the same site ?

The build dependencies should all be available in mainstream
repositories. That reads to me like you are missing a particular perl
module (possibly available from EPEL or the perl CPAN repository).
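For example (an assumption on my part that the module is packaged for
your distro; on CentOS 6 the EPEL repository carries it):

  yum install perl-Crypt-OpenSSL-X509
  # or install it from CPAN:
  cpan Crypt::OpenSSL::X509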

Amos



Re: [squid-users] negotiate_wrapper: fgets() failed! dying..

2015-03-13 Thread Amos Jeffries
On 14/03/2015 4:19 p.m., Donny Vibianto wrote:
> hi Markus,
> 
> unfortunately I moved to CentOS but I still got the same error after 9
> hours of running.
> 
> 2015/03/14 10:05:43| negotiate_wrapper: received Kerberos token
> 2015/03/14 10:05:43| negotiate_wrapper: Starting version 1.0.1
> 2015/03/14 10:05:43| negotiate_wrapper: NTLM command: /usr/bin/ntlm_auth
> --diagnostics --helper-protocol=squid-2.5-ntlmssp
> 2015/03/14 10:05:43| negotiate_wrapper: Kerberos command:
> /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/PROXY.keytab -s
> GSS_C_NO_NAME
> FATAL: Received Bus Error...dying.

Aha! "Bus Error" is a memory failure in the machine.

The helper problems are just side effects of Squid having died and
abandoned them.


It could be the physical memory, or it could be that Squid was built with
CPU optimizations for a different machine architecture than the one you
are running it on.

* Try using a Squid built with --disable-arch-native.

* If the problem remains, check that your Squid 32-bit/64-bit build type
matches the CPU architecture type.

* If it's not those, it's probably a hardware issue. (A sketch of the
first two checks is below.)
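For example (a sketch; the binary path is an assumption for a typical
install):

  # check whether the squid binary type matches the CPU (32- vs 64-bit)
  file /usr/local/squid/sbin/squid
  uname -m

  # rebuild without CPU-specific optimizations
  ./configure --disable-arch-native ...
  make && make install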


Amos



Re: [squid-users] negotiate_wrapper: fgets() failed! dying..

2015-03-13 Thread Donny Vibianto
hi Markus,

unfortunately I moved to CentOS but I still got the same error after 9
hours of running.

2015/03/14 10:05:43| negotiate_wrapper: received Kerberos token
2015/03/14 10:05:43| negotiate_wrapper: Starting version 1.0.1
2015/03/14 10:05:43| negotiate_wrapper: NTLM command: /usr/bin/ntlm_auth
--diagnostics --helper-protocol=squid-2.5-ntlmssp
2015/03/14 10:05:43| negotiate_wrapper: Kerberos command:
/usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/PROXY.keytab -s
GSS_C_NO_NAME
FATAL: Received Bus Error...dying.
2015/03/14 10:05:43 kid2| ctx: enter level  0: '
http://citrine.gundam-dc.com/lng/common/cocos/GameView_v2/res/Battle/radar/radar_b_ring.png?ver=1
'
2015/03/14 10:05:43 kid2| Closing HTTP port [::]:8000
2015/03/14 10:05:43 kid2| storeDirWriteCleanLogs: Starting...
2015/03/14 10:05:43 kid2|   Finished.  Wrote 0 entries.
2015/03/14 10:05:43 kid2|   Took 0.00 seconds (  0.00 entries/sec).
CPU Usage: 0.133 seconds = 0.066 user + 0.067 sys
Maximum Resident Size: 70224 KB
Page faults with physical i/o: 0
2015/03/14 10:05:43| negotiate_wrapper: fgets() failed! dying. errno=1
(Operation not permitted)
2015/03/14 10:05:43| negotiate_wrapper: fgets() failed! dying. errno=1
(Operation not permitted)

thanks for the reply


donny


On Fri, Mar 13, 2015 at 3:43 AM, Markus Moeller wrote:

>   Do you get any more details when you start the wrapper with -d ?
>
> Markus
>
>  "Donny Vibianto"  wrote in message
> news:CAC49LV6SRXbiFcGxqZgAoaHPj1qeifERtSN63ZrDsa_b=iw...@mail.gmail.com...
>   anyone please...?
>
> On Sat, Mar 7, 2015 at 10:02 PM, Donny Vibianto wrote:
>
>>  Hi Guys,
>>
>> After two weeks of successfully running several authentication schemes in my
>> development environment with an average of 10-20 users, I decided to put
>> it into production. It was up and running with ±1000 users, but it only took
>> 3-5 hours before Squid suddenly stopped with this error:
>>
>>  2015/03/06 15:07:59| negotiate_wrapper: fgets() failed! dying.
>> errno=1 (Operation not permitted)
>> 2015/03/06 15:07:59| negotiate_wrapper: fgets() failed! dying.
>> errno=1 (Operation not permitted)
>> 2015/03/06 15:07:59| negotiate_wrapper: fgets() failed! dying.
>> errno=1 (Operation not permitted)
>> 2015/03/06 15:07:59| negotiate_wrapper: Return 'AF
>> oYG2MIGzoAMKAQChCwYJKoZIhvcSAQICooGeBIGbYIGYBgkqhkiG9xIBAgICAG+BiDCBhaADAgEFoQMCAQ+ieTB3oAMCARKicARupdwIysaz6zjRSqsI8V4K0X67z4t5a9aOT7WPlyWRrp+1ol2zL6CYTcfZIyAq8q3D00mf+vpIeoiDDmkUkr+vXN+xkpXkWdX5pMD1hBrF4EDOL1RIp9XjpkdfIcEgg8Oia0Ay153sPK3+Tif4bGE=
>> RickyC@company.local
>> '
>> 2015/03/06 15:07:59| negotiate_wrapper: Return 'AF
>> oYG1MIGyoAMKAQChCwYJKoZIhvcSAQICooGdBIGaYIGXBgkqhkiG9xIBAgICAG+BhzCBhKADAgEFoQMCAQ+ieDB2oAMCARKibwRtX5xuxTxrgsKQpg3Y+kUXLOng15XJ7eDByao5YtNPZByv/zRtrz13QgKkCuk+VkXnCAzaii0ri4Mxvd+4BoskIrjf5FuPP3W59wMTCtkPJD85igR/OmQ4Ch09DJ51WGwnOizMuCW+9jg6EsFa1Q==
>> JanTS@company.local
>>
>> I use Ubuntu Server 14.04 with the newest Squid 3.5.2
>>
>>  Squid Cache: Version 3.5.2
>> Service Name: squid
>> configure options:  '--enable-build-info'
>> '--enable-removal-policies=lru,heap' '--enable-ltdl-install'
>> '--enable-storeio=ufs,aufs,rock' '--enable-auth-basic=LDAP'
>> '--enable-auth-negotiate=wrapper,kerberos'
>> '--enable-external-acl-helpers=LDAP_group' '--enable-translation'
>> '--enable-ssl-crtd' '--enable-gnuregex' '--enable-xmalloc-debug'
>> '--enable-xmalloc-debug-trace' '--enable-xmalloc-statistics'
>> '--enable-async-io' '--enable-icmp' '--enable-delay-pools'
>> '--enable-useragent-log' '--enable-kill-parent-hack' '--enable-htpc'
>> '--enable-forw-via-db' '--enable-cache-digests' '--enable-underscores'
>> '--enable-x-accelerator-vary' '--enable-esi' '--enable-inline'
>> '--enable-linux-netfilter' '--with-openssl' '--with-large-files'
>>
>> here is my squid.conf:
>>
>> # ===== ACL Cachemgr =====
>> acl manager url_regex -i ^cache_object:// /squid-internal-mgr/
>> acl managerAdmin src "/usr/local/squid/etc/mgradmin.txt"
>> acl stream url_regex -i "/usr/local/squid/etc/stream"
>>
>> acl download url_regex -i "/usr/local/squid/etc/download"
>> acl whitelist url_regex -i "/usr/local/squid/etc/whitelist"
>> acl blacklist url_regex -i "/usr/local/squid/etc/blacklist"
>>
>> acl SSL_ports port 443
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl http proto http
>> acl CONNECT method CONNECT
>>
>> # ===== Authenticate using negotiate_wrapper =====
>> auth_param negotiate program
>> /usr/local/squid/libexec/negotiate_wrapper_auth -d --ntlm
>> /usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp
>> --kerberos /usr/local/squid/libexec

[squid-users] CentOS 6 repo

2015-03-13 Thread Alex Samad
Hi

Quick on

squid.x86_64            7:3.4.10-1.el6  @squid
squid-debuginfo.x86_64  7:3.4.10-1.el6  squid
squid-helpers.x86_64    7:3.4.10-1.el6  squid
squid-sysvinit.x86_64   7:3.4.3-1.el6   squid

the sysvinit is not up to date !


and
yum install squid-helpers.x86_64
Error: Package: 7:squid-helpers-3.4.10-1.el6.x86_64 (squid)
   Requires: perl(Crypt::OpenSSL::X509)

any chance to host the perl package at the same site ?


Re: [squid-users] squid intercept config

2015-03-13 Thread Alberto Perez
Thanks a lot Yuri,
I merged some of these options into my config, and I will now see how the
HIT rate goes. My squid runs on such limited bandwidth that I need to be
as aggressive as I can about caching the content.

Thanks again for sharing, very appreciated

Alberto

On Fri, Mar 13, 2015 at 4:01 PM, Yuri Voinov  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> This is know-how in itself. ;)
>
> To be serious,
>
> you must carefully play with refresh_pattern(s) and some squid.conf
> parameters (and also with the Store ID feature) to get a higher HIT ratio.
>
> Just for example (this is NOT a complete config! No responsibility or
> any guarantees if it is simply copy-and-pasted into your configs! This
> is an AS-IS example!):
>
> # Keep swf in cache even if asked not to
> refresh_pattern -i \.(swf)(\?|$) 10080 90% 43200 override-expire ignore-reload reload-into-ims ignore-private
> # .NET cache
> refresh_pattern -i \.(as(h|p)x?)(\?|$) 10080 90% 43200 reload-into-ims
> # Updates: Windows, Adobe, Java
> refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f|p]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
> refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f|p]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
> refresh_pattern -i my.windowsupdate.website.com/.*\.(cab|exe|ms[i|u|f|p]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
> refresh_pattern -i adobe.com/.*\.(zip|exe) 4320 80% 43200 reload-into-ims
> refresh_pattern -i java.com/.*\.(zip|exe) 4320 80% 43200 reload-into-ims
> refresh_pattern -i sun.com/.*\.(zip|exe) 4320 80% 43200 reload-into-ims
> refresh_pattern -i google\.com.*\.(zip|exe) 4320 80% 43200 reload-into-ims
> refresh_pattern -i macromedia\.com.*\.(zip|exe) 4320 80% 43200 reload-into-ims
> # Other long-lived items
> refresh_pattern -i \.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|webp|flv|mp4)(\?|$) 14400 99% 518400 ignore-no-store override-expire ignore-reload reload-into-ims ignore-private ignore-must-revalidate
> refresh_pattern -i \.((m?|x?|s?)htm(l?)|css|js|xml|php|json)(\?|$) 10080 90% 86400 ignore-no-store override-expire override-lastmod reload-into-ims ignore-private ignore-must-revalidate
> # Default patterns
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 10080 override-lastmod reload-into-ims
>
> The example above also requires some additional cache-related
> parameters to be changed.
>
> Also, you are strongly recommended to research average user activity AND
> play around with Vary HTTP headers.
>
> And others.
>
> Each squid setup is place-specific, and depends on your access/deny
> lists, security policy, users/network activity, etc.
>
> WBR, Yuri
>
> PS. Your question has NO simple answer. Beware: copy-and-pasting any
> foreign config cannot guarantee the same results for YOU.
>
> 14.03.15 1:52, Alberto Perez wrote:
> > Can you share more details about "Aggressive dynamic content
> > caching requires some special tweaks"? I am very interested.
> >
> > Thanks
> >
> >
> >
> > On 3/13/15, Yuri Voinov  wrote:
> >
> >
> > 13.03.15 23:33, Amos Jeffries пишет:
>  On 14/03/2015 5:47 a.m., Monah Baki wrote:
> 
>  
> 
> > half_closed_clients off quick_abort_min 0 KB
> > quick_abort_max 0 KB vary_ignore_expire on reload_into_ims
> > on memory_pools off cache_mem 4096 MB visible_hostname
> > isn-phc-cache minimum_object_size 0 bytes
> 
> > maximum_object_size 512 MB maximum_object_size 512 KB
> 
>  KB value overwriting MB value.
> 
> 
> > ipcache_size 1024 ipcache_low 90 ipcache_high 95
> > cache_swap_low 98 cache_swap_high 100 fqdncache_size 16384
> > retry_on_error on offline_mode off logfile_rotate 10
> > dns_nameservers 8.8.8.8 41.78.211.30
> >
> >
> >
> >
> > access.log:
> >
> > 1426267535.210198 10.0.0.23 TCP_MISS/200 412 GET
> > http://jadserve.postrelease.com/trk.gif? -
> > ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211
> > 198 10.0.0.23 TCP_MISS/200 412 GET
> > http://jadserve.postrelease.com/trk.gif? -
> > ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211
> > 198 10.0.0.23 TCP_MISS/200 412 GET
> > http://jadserve.postrelease.com/trk.gif? -
> > ORIGINAL_DST/54.225.133.227 image/gif 1426267535.223
> > 301 10.0.0.23 TCP_MISS/200 222 GET
> > http://rma-api.gravity.com/v1/beacons/log? -
> > ORIGINAL_DST/80.239.148.18 text/html 1426267535.244195
> > 10.0.0.23 TCP_MISS/200 412 GET
> > http://jadserve.postrelease.com/trk.gif? -
> > ORIGINAL_DST/54.225.133.227 image/gif
> 
> 
>  Lots of Akamai hosted requests. Akamai play tricks with DNS
>  responses.
> > In my installation I've used local Unbound DNS cache and, before
> > it, forced DNS interception to him

Re: [squid-users] squid intercept config

2015-03-13 Thread Yuri Voinov
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

This is know-how in itself. ;)

To be serious,

you must carefully play with refresh_pattern(s) and some squid.conf
parameters (and also with the Store ID feature) to get a higher HIT ratio.

Just for example (this is NOT a complete config! No responsibility or
any guarantees if it is simply copy-and-pasted into your configs! This
is an AS-IS example!):

# Keep swf in cache even if asked not to
refresh_pattern -i \.(swf)(\?|$) 10080 90% 43200 override-expire ignore-reload reload-into-ims ignore-private
# .NET cache
refresh_pattern -i \.(as(h|p)x?)(\?|$) 10080 90% 43200 reload-into-ims
# Updates: Windows, Adobe, Java
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f|p]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f|p]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i my.windowsupdate.website.com/.*\.(cab|exe|ms[i|u|f|p]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i adobe.com/.*\.(zip|exe) 4320 80% 43200 reload-into-ims
refresh_pattern -i java.com/.*\.(zip|exe) 4320 80% 43200 reload-into-ims
refresh_pattern -i sun.com/.*\.(zip|exe) 4320 80% 43200 reload-into-ims
refresh_pattern -i google\.com.*\.(zip|exe) 4320 80% 43200 reload-into-ims
refresh_pattern -i macromedia\.com.*\.(zip|exe) 4320 80% 43200 reload-into-ims
# Other long-lived items
refresh_pattern -i \.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|webp|flv|mp4)(\?|$) 14400 99% 518400 ignore-no-store override-expire ignore-reload reload-into-ims ignore-private ignore-must-revalidate
refresh_pattern -i \.((m?|x?|s?)htm(l?)|css|js|xml|php|json)(\?|$) 10080 90% 86400 ignore-no-store override-expire override-lastmod reload-into-ims ignore-private ignore-must-revalidate
# Default patterns
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 10080 override-lastmod reload-into-ims

The example above also requires some additional cache-related
parameters to be changed.
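Since the Store ID feature is mentioned above, here is a minimal sketch of
it (the helper is the storeid_file_rewrite shipped with Squid; the paths
and the CDN pattern are assumptions):

  store_id_program /usr/local/squid/libexec/storeid_file_rewrite /etc/squid/storeid.db
  store_id_children 5

  # /etc/squid/storeid.db - two tab-separated columns: URL regex, Store ID
  ^http://cdn[0-9]+\.example\.com/(.*)   http://cdn.example.com.squid.internal/$1

With this, identical objects served from many mirror hostnames collapse
onto a single cache key.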

Also, you are strongly recommended to research average user activity AND
play around with Vary HTTP headers.

And others.

Each squid setup is place-specific, and depends on your access/deny
lists, security policy, users/network activity, etc.

WBR, Yuri

PS. Your question has NO simple answer. Beware: copy-and-pasting any
foreign config cannot guarantee the same results for YOU.

14.03.15 1:52, Alberto Perez wrote:
> Can you share more details about "Aggressive dynamic content
> caching requires some special tweaks"? I am very interested.
> 
> Thanks
> 
> 
> 
> On 3/13/15, Yuri Voinov  wrote:
> 
> 
> 13.03.15 23:33, Amos Jeffries wrote:
 On 14/03/2015 5:47 a.m., Monah Baki wrote:
 
 
 
> half_closed_clients off quick_abort_min 0 KB
> quick_abort_max 0 KB vary_ignore_expire on reload_into_ims
> on memory_pools off cache_mem 4096 MB visible_hostname
> isn-phc-cache minimum_object_size 0 bytes
 
> maximum_object_size 512 MB maximum_object_size 512 KB
 
 KB value overwriting MB value.
 
 
> ipcache_size 1024 ipcache_low 90 ipcache_high 95
> cache_swap_low 98 cache_swap_high 100 fqdncache_size 16384
> retry_on_error on offline_mode off logfile_rotate 10
> dns_nameservers 8.8.8.8 41.78.211.30
> 
> 
> 
> 
> access.log:
> 
> 1426267535.210198 10.0.0.23 TCP_MISS/200 412 GET 
> http://jadserve.postrelease.com/trk.gif? - 
> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211
> 198 10.0.0.23 TCP_MISS/200 412 GET 
> http://jadserve.postrelease.com/trk.gif? - 
> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211
> 198 10.0.0.23 TCP_MISS/200 412 GET 
> http://jadserve.postrelease.com/trk.gif? - 
> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.223
> 301 10.0.0.23 TCP_MISS/200 222 GET 
> http://rma-api.gravity.com/v1/beacons/log? - 
> ORIGINAL_DST/80.239.148.18 text/html 1426267535.244195 
> 10.0.0.23 TCP_MISS/200 412 GET 
> http://jadserve.postrelease.com/trk.gif? - 
> ORIGINAL_DST/54.225.133.227 image/gif
 
 
 Lots of Akamai hosted requests. Akamai play tricks with DNS 
 responses.
> In my installation I've used a local Unbound DNS cache and, before
> it, forced DNS interception to it with Cisco. :)
> 
> So, I don't care about any host's DNS quirks. ;)
> 
 
 Check your cache.log for security warnings; 
 


 
Note that objects failing the Host validation are not cacheable.
 
 
> 1426267535.333423 10.0.0.23 TCP_MISS/200 1420 GET 
> http://hpr.outbrain.com/utils/get? -
> ORIGINAL_DST/50.31.185.42 text/x-json 1426267535.345412
> 10.0.0.23 TCP_MISS/200 11179 GET

Re: [squid-users] squid intercept config

2015-03-13 Thread Alberto Perez
Can you share more details about "Aggressive dynamic content caching
requires some special tweaks"? I am very interested.

Thanks



On 3/13/15, Yuri Voinov  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
>
>
> 13.03.15 23:33, Amos Jeffries wrote:
>> On 14/03/2015 5:47 a.m., Monah Baki wrote:
>>
>> 
>>
>>> half_closed_clients off quick_abort_min 0 KB quick_abort_max 0
>>> KB vary_ignore_expire on reload_into_ims on memory_pools off
>>> cache_mem 4096 MB visible_hostname isn-phc-cache
>>> minimum_object_size 0 bytes
>>
>>> maximum_object_size 512 MB maximum_object_size 512 KB
>>
>> KB value overwriting MB value.
>>
>>
>>> ipcache_size 1024 ipcache_low 90 ipcache_high 95 cache_swap_low
>>> 98 cache_swap_high 100 fqdncache_size 16384 retry_on_error on
>>> offline_mode off logfile_rotate 10 dns_nameservers 8.8.8.8
>>> 41.78.211.30
>>>
>>>
>>>
>>>
>>> access.log:
>>>
>>> 1426267535.210198 10.0.0.23 TCP_MISS/200 412 GET
>>> http://jadserve.postrelease.com/trk.gif? -
>>> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211198
>>> 10.0.0.23 TCP_MISS/200 412 GET
>>> http://jadserve.postrelease.com/trk.gif? -
>>> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211198
>>> 10.0.0.23 TCP_MISS/200 412 GET
>>> http://jadserve.postrelease.com/trk.gif? -
>>> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.223301
>>> 10.0.0.23 TCP_MISS/200 222 GET
>>> http://rma-api.gravity.com/v1/beacons/log? -
>>> ORIGINAL_DST/80.239.148.18 text/html 1426267535.244195
>>> 10.0.0.23 TCP_MISS/200 412 GET
>>> http://jadserve.postrelease.com/trk.gif? -
>>> ORIGINAL_DST/54.225.133.227 image/gif
>>
>>
>> Lots of Akamai hosted requests. Akamai play tricks with DNS
>> responses.
> In my installation I've used a local Unbound DNS cache and, before it,
> forced DNS interception to it with Cisco. :)
>
> So, I don't care about any host's DNS quirks. ;)
>
>>
>> Check your cache.log for security warnings;
>> 
>>
>> Note that objects failing the Host validation are not cacheable.
>>
>>
>>> 1426267535.333423 10.0.0.23 TCP_MISS/200 1420 GET
>>> http://hpr.outbrain.com/utils/get? - ORIGINAL_DST/50.31.185.42
>>> text/x-json 1426267535.345412 10.0.0.23 TCP_MISS/200 11179
>>> GET http://p.visualrevenue.com/? - ORIGINAL_DST/50.31.185.40
>>> text/javascript 1426267535.346411 10.0.0.23 TCP_MISS/200 423
>>> GET http://t1.visualrevenue.com/? - ORIGINAL_DST/64.74.232.44
>>> image/gif
>>
>> Not sure about them. Maybe genuine MISS, maybe not.
>
> Aggressive dynamic content caching requires some special tweaks. ;)
>
>>
>> It could also be the issues Antony pointed out, with the objects
>> just naturally not being cacheable.
>>
>>
>>> 1426267535.363128 10.0.0.23 TCP_REFRESH_UNMODIFIED/304 327
>>> GET
>>> http://z.cdn.turner.com/cnn/.element/widget/video/videoapi/api/js/vendor/jquery.ba-bbq.js
>>>
>>>
> - - ORIGINAL_DST/80.239.152.153 application/x-javascript
>>
>> There is a hit.
>>
>> I guess you are new to Squid-3 ? Squid is HTTP/1.1 compliant now
>> and the caching rules are slightly different from requirements on
>> HTTP/1.0 software. A lot of content that previously could not be
>> stored now can (authenticated, private, no-cache, etc.). But being
>> sensitive info also requires revalidation in order to be used, so
>> they show up like the above.
>>
>> Amos
>>
>> ___ squid-users mailing
>> list squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBAgAGBQJVAy/qAAoJENNXIZxhPexGOUEH/2yt1ql+ndo1We1E06LvIZl7
> 4PXY1kzuHT6EpOYO9LpLKtE+dPNYJuHKiUEF2hAGz5DP/heKq8PFRBTkMD18sueN
> jm+UfP8BdxgRYuiQWtWNteV0gbH4nOBeJ6QwqlEHMwcsdPtkwWCGA0MS6co+IXKb
> poouP6xQoNddx/UKicu6PQZDj5HRmynTP2c0mJuFEdlQxONgFiP4mqSFBwWhH/B/
> hhdSfxg53xfQ+2B5TsVrKyxmJoIYpHgFZid/pk+Q2bb0WIy8bhHA72EHPjIu5K5Z
> wobLGng+oE0i2erqtZiFR8daGdKcRW7FDYzHi+LJEHJj3i+z0mRIQkGTn3Nxfhg=
> =Cnai
> -END PGP SIGNATURE-
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


Re: [squid-users] squid intercept config

2015-03-13 Thread Yuri Voinov
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



13.03.15 23:33, Amos Jeffries wrote:
> On 14/03/2015 5:47 a.m., Monah Baki wrote:
> 
> 
> 
>> half_closed_clients off quick_abort_min 0 KB quick_abort_max 0
>> KB vary_ignore_expire on reload_into_ims on memory_pools off 
>> cache_mem 4096 MB visible_hostname isn-phc-cache 
>> minimum_object_size 0 bytes
> 
>> maximum_object_size 512 MB maximum_object_size 512 KB
> 
> KB value overwriting MB value.
> 
> 
>> ipcache_size 1024 ipcache_low 90 ipcache_high 95 cache_swap_low
>> 98 cache_swap_high 100 fqdncache_size 16384 retry_on_error on 
>> offline_mode off logfile_rotate 10 dns_nameservers 8.8.8.8
>> 41.78.211.30
>> 
>> 
>> 
>> 
>> access.log:
>> 
>> 1426267535.210198 10.0.0.23 TCP_MISS/200 412 GET 
>> http://jadserve.postrelease.com/trk.gif? -
>> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211198
>> 10.0.0.23 TCP_MISS/200 412 GET 
>> http://jadserve.postrelease.com/trk.gif? -
>> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.211198
>> 10.0.0.23 TCP_MISS/200 412 GET 
>> http://jadserve.postrelease.com/trk.gif? -
>> ORIGINAL_DST/54.225.133.227 image/gif 1426267535.223301
>> 10.0.0.23 TCP_MISS/200 222 GET 
>> http://rma-api.gravity.com/v1/beacons/log? -
>> ORIGINAL_DST/80.239.148.18 text/html 1426267535.244195
>> 10.0.0.23 TCP_MISS/200 412 GET 
>> http://jadserve.postrelease.com/trk.gif? -
>> ORIGINAL_DST/54.225.133.227 image/gif
> 
> 
> Lots of Akamai hosted requests. Akamai play tricks with DNS
> responses.
In my installation I've used a local Unbound DNS cache and, before it,
forced DNS interception to it with Cisco. :)

So, I don't care about any host's DNS quirks. ;)

> 
> Check your cache.log for security warnings; 
> 
> 
> Note that objects failing the Host validation are not cacheable.
> 
> 
>> 1426267535.333423 10.0.0.23 TCP_MISS/200 1420 GET 
>> http://hpr.outbrain.com/utils/get? - ORIGINAL_DST/50.31.185.42
>> text/x-json 1426267535.345412 10.0.0.23 TCP_MISS/200 11179
>> GET http://p.visualrevenue.com/? - ORIGINAL_DST/50.31.185.40
>> text/javascript 1426267535.346411 10.0.0.23 TCP_MISS/200 423
>> GET http://t1.visualrevenue.com/? - ORIGINAL_DST/64.74.232.44
>> image/gif
> 
> Not sure about them. Maybe genuine MISS, maybe not.

Aggressive dynamic content caching requires some special tweaks. ;)

> 
> It could also be the issues Antony pointed out, with the objects
> just naturally not being cacheable.
> 
> 
>> 1426267535.363128 10.0.0.23 TCP_REFRESH_UNMODIFIED/304 327
>> GET 
>> http://z.cdn.turner.com/cnn/.element/widget/video/videoapi/api/js/vendor/jquery.ba-bbq.js
>>
>> 
- - ORIGINAL_DST/80.239.152.153 application/x-javascript
> 
> There is a hit.
> 
> I guess you are new to Squid-3 ? Squid is HTTP/1.1 compliant now
> and the caching rules are slightly different from requirements on
> HTTP/1.0 software. A lot of content that previously could not be
> stored now can (authenticated, private, no-cache, etc.). But being
> sensitive info also requires revalidation in order to be used, so
> they show up like the above.
> 
> Amos
> 
> ___ squid-users mailing
> list squid-users@lists.squid-cache.org 
> http://lists.squid-cache.org/listinfo/squid-users
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBAgAGBQJVAy/qAAoJENNXIZxhPexGOUEH/2yt1ql+ndo1We1E06LvIZl7
4PXY1kzuHT6EpOYO9LpLKtE+dPNYJuHKiUEF2hAGz5DP/heKq8PFRBTkMD18sueN
jm+UfP8BdxgRYuiQWtWNteV0gbH4nOBeJ6QwqlEHMwcsdPtkwWCGA0MS6co+IXKb
poouP6xQoNddx/UKicu6PQZDj5HRmynTP2c0mJuFEdlQxONgFiP4mqSFBwWhH/B/
hhdSfxg53xfQ+2B5TsVrKyxmJoIYpHgFZid/pk+Q2bb0WIy8bhHA72EHPjIu5K5Z
wobLGng+oE0i2erqtZiFR8daGdKcRW7FDYzHi+LJEHJj3i+z0mRIQkGTn3Nxfhg=
=Cnai
-END PGP SIGNATURE-


Re: [squid-users] squid intercept config

2015-03-13 Thread Amos Jeffries
On 14/03/2015 5:47 a.m., Monah Baki wrote:



> half_closed_clients off
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> vary_ignore_expire on
> reload_into_ims on
> memory_pools off
> cache_mem 4096 MB
> visible_hostname isn-phc-cache
> minimum_object_size 0 bytes

> maximum_object_size 512 MB
> maximum_object_size 512 KB

The later KB value overwrites the earlier MB value.
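A sketch of the fix, assuming 512 MB is the limit actually intended: keep
only the one line

  maximum_object_size 512 MB

since later occurrences of the same directive replace earlier ones.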


> ipcache_size 1024
> ipcache_low 90
> ipcache_high 95
> cache_swap_low 98
> cache_swap_high 100
> fqdncache_size 16384
> retry_on_error on
> offline_mode off
> logfile_rotate 10
> dns_nameservers 8.8.8.8 41.78.211.30
> 
> 
> 
> 
> access.log:
> 
> 1426267535.210198 10.0.0.23 TCP_MISS/200 412 GET
> http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227
> image/gif
> 1426267535.211198 10.0.0.23 TCP_MISS/200 412 GET
> http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227
> image/gif
> 1426267535.211198 10.0.0.23 TCP_MISS/200 412 GET
> http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227
> image/gif
> 1426267535.223301 10.0.0.23 TCP_MISS/200 222 GET
> http://rma-api.gravity.com/v1/beacons/log? - ORIGINAL_DST/80.239.148.18
> text/html
> 1426267535.244195 10.0.0.23 TCP_MISS/200 412 GET
> http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227
> image/gif


Lots of Akamai hosted requests. Akamai play tricks with DNS responses.

Check your cache.log for security warnings;
 

Note that objects failing the Host validation are not cacheable.


> 1426267535.333423 10.0.0.23 TCP_MISS/200 1420 GET
> http://hpr.outbrain.com/utils/get? - ORIGINAL_DST/50.31.185.42 text/x-json
> 1426267535.345412 10.0.0.23 TCP_MISS/200 11179 GET
> http://p.visualrevenue.com/? - ORIGINAL_DST/50.31.185.40 text/javascript
> 1426267535.346411 10.0.0.23 TCP_MISS/200 423 GET
> http://t1.visualrevenue.com/? - ORIGINAL_DST/64.74.232.44 image/gif

Not sure about them. Maybe genuine MISS, maybe not.

It could also be the issues Antony pointed out, with the objects just
naturally not being cacheable.


> 1426267535.363128 10.0.0.23 TCP_REFRESH_UNMODIFIED/304 327 GET
> http://z.cdn.turner.com/cnn/.element/widget/video/videoapi/api/js/vendor/jquery.ba-bbq.js
> - ORIGINAL_DST/80.239.152.153 application/x-javascript

There is a hit.

I guess you are new to Squid-3?
Squid is HTTP/1.1 compliant now, and the caching rules are slightly
different from the requirements on HTTP/1.0 software. A lot of content that
previously could not be stored now can be (authenticated, private,
no-cache, etc.). But being sensitive info, it also requires revalidation in
order to be used, so it shows up like the above.
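For example, a response like this (a sketch; header values are
illustrative) can now be stored, but each reuse requires revalidation
with the origin, which is what shows up as TCP_REFRESH_UNMODIFIED/304:

  HTTP/1.1 200 OK
  Cache-Control: no-cache
  ETag: "abc123"

In HTTP/1.1, no-cache means "store, but revalidate before every reuse",
not "do not store".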

Amos



Re: [squid-users] squid intercept config

2015-03-13 Thread Monah Baki
It's working now; all I did was comment out ("rem") the following:

# half_closed_clients off
# quick_abort_min 0 KB
# quick_abort_max 0 KB
# vary_ignore_expire on
# reload_into_ims on
# memory_pools off
# cache_mem 4096 MB
# # memory_cache_shared on
visible_hostname isn-phc-cache
minimum_object_size 0 bytes
maximum_object_size 512 MB
maximum_object_size 512 KB
ipcache_size 1024
# ipcache_low 90
# ipcache_high 95
cache_swap_low 98
cache_swap_high 100
# fqdncache_size 16384
# retry_on_error on
# offline_mode off
logfile_rotate 10
dns_nameservers 8.8.8.8 41.78.211.30

I can see tcp_hits.

Note to self: if it's something I do not understand, don't add it.


On Fri, Mar 13, 2015 at 1:23 PM, Amos Jeffries  wrote:

> On 14/03/2015 6:15 a.m., Antony Stone wrote:
> > On Friday 13 March 2015 at 17:47:44 (EU time), Monah Baki wrote:
> >>
> >> http_access allow localhost manager
> >> http_access deny manager
> >>
> >> #http_access deny to_localhost
> >>
> >> http_access allow localnet
> >> http_access allow localhost
> >
> > You've got the standard references here (and above, for cache manager
> access)
> > for localhost, and yet I don't see it defined anywhere - have you
> deliberately
> > removed it?
>
> Current Squid versions define those ACLs automatically.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


Re: [squid-users] squid intercept config

2015-03-13 Thread Amos Jeffries
On 14/03/2015 6:15 a.m., Antony Stone wrote:
> On Friday 13 March 2015 at 17:47:44 (EU time), Monah Baki wrote:
>>
>> http_access allow localhost manager
>> http_access deny manager
>>
>> #http_access deny to_localhost
>>
>> http_access allow localnet
>> http_access allow localhost
> 
> You've got the standard references here (and above, for cache manager access) 
> for localhost, and yet I don't see it defined anywhere - have you 
> deliberately 
> removed it?

Current Squid versions define those ACLs automatically.
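For reference, a sketch of those built-in defaults (check the
squid.conf.documented shipped with your version for the exact set):

  acl localhost src 127.0.0.1/32 ::1
  acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
  acl manager url_regex -i ^cache_object:// /squid-internal-mgr/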

Amos



Re: [squid-users] squid intercept config

2015-03-13 Thread Antony Stone
On Friday 13 March 2015 at 17:47:44 (EU time), Monah Baki wrote:

> acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7       # RFC 4193 local private network range
> acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
> 
> acl SSL_ports port 443
> acl Safe_ports port 80          # http
> acl Safe_ports port 21          # ftp
> acl Safe_ports port 443         # https
> acl Safe_ports port 70          # gopher
> acl Safe_ports port 210         # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280         # http-mgmt
> acl Safe_ports port 488         # gss-http
> acl Safe_ports port 591         # filemaker
> acl Safe_ports port 777         # multiling http
> acl CONNECT method CONNECT
> 
> http_access deny !Safe_ports
> 
> http_access deny CONNECT !SSL_ports
> 
> http_access allow localhost manager
> http_access deny manager
> 
> #http_access deny to_localhost
> 
> http_access allow localnet
> http_access allow localhost

You've got the standard references here (and above, for cache manager access) 
for localhost, and yet I don't see it defined anywhere - have you deliberately 
removed it?

> http_access deny all
> 
> http_port 3128
> http_port 3129 intercept
> 
> cache_dir ufs /usr/local/squid/var/cache/squid 35 16 256
> 
> 
> refresh_pattern ^ftp:             1440  20%  10080
> refresh_pattern ^gopher:          1440  0%   1440
> refresh_pattern -i (/cgi-bin/|\?) 0     0%   0
> refresh_pattern .                 0     20%  4320
> 
> half_closed_clients off
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> vary_ignore_expire on
> reload_into_ims on
> memory_pools off
> cache_mem 4096 MB
> visible_hostname isn-phc-cache
> minimum_object_size 0 bytes
> maximum_object_size 512 MB
> maximum_object_size 512 KB
> ipcache_size 1024
> ipcache_low 90
> ipcache_high 95
> cache_swap_low 98
> cache_swap_high 100
> fqdncache_size 16384
> retry_on_error on
> offline_mode off
> logfile_rotate 10
> dns_nameservers 8.8.8.8 41.78.211.30

> access.log:
> 
> 1426267535.210198 10.0.0.23 TCP_MISS/200 412 GET
> http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227
> image/gif

I see quite a lot of entries in your access log for things to do with advert 
servers.  Are you certain that these objects haven't been marked by the server 
as "nocache" or similar?

Try accessing something simple and plain, such as the squid project home page 
http://www.squid-cache.org/ and see what shows up in your access log.

Also, try configuring a browser to use the proxy listening on port 3128 and see 
if that starts showing you cache hits.
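For example (a sketch, run on the proxy box itself; 3128 is the
forward-proxy port from the config above):

  squidclient -h 127.0.0.1 -p 3128 http://www.squid-cache.org/
  squidclient -h 127.0.0.1 -p 3128 http://www.squid-cache.org/

If caching is working, the second request should be logged as TCP_HIT (or
TCP_MEM_HIT) instead of TCP_MISS.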


Regards,


Antony.

-- 
Late in 1972 President Richard Nixon announced that the rate of increase of 
inflation was decreasing.   This was the first time a sitting president used a 
third derivative to advance his case for re-election.

 - Hugo Rossi, Notices of the American Mathematical Society

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] squid intercept config

2015-03-13 Thread Monah Baki
#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 3129 intercept

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /usr/local/squid/var/cache/squid 35 16 256


#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440  0%   1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%   0
refresh_pattern .                 0     20%  4320

half_closed_clients off
quick_abort_min 0 KB
quick_abort_max 0 KB
vary_ignore_expire on
reload_into_ims on
memory_pools off
cache_mem 4096 MB
visible_hostname isn-phc-cache
minimum_object_size 0 bytes
maximum_object_size 512 MB
maximum_object_size 512 KB
ipcache_size 1024
ipcache_low 90
ipcache_high 95
cache_swap_low 98
cache_swap_high 100
fqdncache_size 16384
retry_on_error on
offline_mode off
logfile_rotate 10
dns_nameservers 8.8.8.8 41.78.211.30




access.log:

1426267535.210    198 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.211    198 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.211    198 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.223    301 10.0.0.23 TCP_MISS/200 222 GET http://rma-api.gravity.com/v1/beacons/log? - ORIGINAL_DST/80.239.148.18 text/html
1426267535.244    195 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.333    423 10.0.0.23 TCP_MISS/200 1420 GET http://hpr.outbrain.com/utils/get? - ORIGINAL_DST/50.31.185.42 text/x-json
1426267535.345    412 10.0.0.23 TCP_MISS/200 11179 GET http://p.visualrevenue.com/? - ORIGINAL_DST/50.31.185.40 text/javascript
1426267535.346    411 10.0.0.23 TCP_MISS/200 423 GET http://t1.visualrevenue.com/? - ORIGINAL_DST/64.74.232.44 image/gif
1426267535.363    128 10.0.0.23 TCP_REFRESH_UNMODIFIED/304 327 GET http://z.cdn.turner.com/cnn/.element/widget/video/videoapi/api/js/vendor/jquery.ba-bbq.js - ORIGINAL_DST/80.239.152.153 application/x-javascript
1426267535.381    193 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.406    189 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.408    190 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.408    191 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.418    200 10.0.0.23 TCP_MISS/200 412 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.437    188 10.0.0.23 TCP_MISS/200 431 GET http://jadserve.postrelease.com/trk.gif? - ORIGINAL_DST/54.225.133.227 image/gif
1426267535.464    128 10.0.0.23 TCP_REFRESH_UNMODIFIED/304 327 GET
http

[squid-users] i hope to use external ACL + ldap at squid 3.5.2, but i don't find ext_ldap_group_acl and basic_ldap_auth from /squid/libexec/

2015-03-13 Thread johnzeng

Hello All:

I hope to use external ACL + LDAP with Squid 3.5.2, but I don't find
ext_ldap_group_acl and basic_ldap_auth in /squid/libexec/

If possible, please give me some advice. Thanks

This is my config


configure options: '--prefix=/accerater/webcache3'
'--enable-follow-x-forwarded-for' '--enable-snmp'
'--enable-linux-netfilter' '--enable-storeio=aufs,rock'
'--enable-wccpv2' '--with-large-files'
'--enable-removal-policies=lru,heap' '--enable-async-io=128'
'--enable-http-violations' '--enable-default-err-language=English'
'--enable-err-languages=English' '--enable-referer-log'
'--enable-useragent-log' '--with-maxfd=65536'
'--enable-large-cache-files' '--enable-delay-pools'
'--enable-forward-log' '--with-pthreads' 'LIBS=-ltcmalloc'
'--disable-internal-dns' '--enable-url-rewrite-helpers'
'--enable-log-daemon-helpers' '--enable-epoll'
'--enable-ltdl-convenience' '--with-included-ltdl'
'--enable-disk-io=AIO,Blocking,DiskThreads,IpcIo,Mmapped'

This is full file at /squid/libexec


basic_db_auth basic_ncsa_auth basic_smb_auth digest_file_auth
ext_unix_group_acl log_file_daemon storeid_file_rewrite
basic_fake_auth basic_nis_auth basic_smb_auth.sh ext_delayer_acl
ext_wbinfo_group_acl negotiate_wrapper_auth unlinkd
basic_getpwnam_auth basic_pop3_auth basic_smb_lm_auth
ext_file_userip_acl helper-mux.pl ntlm_fake_auth url_fake_rewrite
basic_msnt_multi_domain_auth basic_radius_auth cachemgr.cgi
ext_sql_session_acl log_db_daemon ntlm_smb_lm_auth url_fake_rewrite.sh


Re: [squid-users] Reverse Proxy Funny Logging Issue

2015-03-13 Thread dweimer

On 03/12/2015 10:31 am, dweimer wrote:

On 01/23/2013 10:39 pm, Amos Jeffries wrote:

On 24/01/2013 4:13 a.m., dweimer wrote:

On 2013-01-23 08:40, dweimer wrote:

On 2013-01-22 23:30, Amos Jeffries wrote:

On 23/01/2013 5:34 a.m., dweimer wrote:
I just upgraded my reverse proxy server last night from 3.1.20 to 
3.2.6, all is working well except one of my log rules, and I can't 
figure out why.


Please run "squid -k parse" and resolve the WARNING or ERROR which 
are listed.


There are two possible reasons...



I have a several sites behind the server, with dstdomain access 
rules setup.


acl website1 dstdomain www.website1.com
acl website2 dstdomain www.website2.com
acl website2 dstdomain www.website3.com


Possible reason #1 (assuming this is an accurate copy-n-paste from
your config file): you have no website3 ACL definition?


That was a typo in the email, correct ACL is in the configuration,
squid -k parse outputs no warnings or errors.




...

Followed by the access rules

http_access allow website1
http_access allow website2
http_access allow website3
...
http_access deny all

Some are using rewrites
url_rewrite_program /usr/local/etc/squid/url_rewrite.py
url_rewrite_children 20
url_rewrite_access allow website1
url_rewrite_access allow website3
...
url_rewrite_access deny all

Then my access logs

# First I grab everything in one
access_log daemon:/var/log/squid/access.log squid all





access_log daemon:/var/log/squid/website1.log combined website1
access_log daemon:/var/log/squid/website2.log combined website2
access_log daemon:/var/log/squid/website3.log combined website3
...

everything works, right down to one of the access logs: the data 
shows up in the access.log file, and the data shows up in the 
individual logs for all the others, except that one.  If we use 
website3 from the above example, like my actual file, the access 
rule works on the url_rewrite_access allow line, but for some 
reason it is failing on the log line.  squid -k parse doesn't show 
any errors, and shows a Processing: access_log 
daemon:/var/log/squid/website3.log combined website3 line in the 
output.


The log in question was originally at the end of my access_log 
list section, so I changed the order around to see if for some 
reason it was only the last one not working, no change still only 
that one not working, And the new last one in the list still works 
as expected.


I know the ACL is working as it works correctly on the rewrite 
rule and the http access just above the log rules, anyone have any 
ideas on how I can figure out why the log entry isn't working?




Changed lines back to daemon, changed acl on logs to the rewrite side 
used on the cache_peer_access lines later in the configuration.  
Works now, and logs even show up with the pre-rewrite rule host 
information...


That does make me wonder why some lines were getting logged but not 
all. The sites I thought were working do have higher usage; maybe I 
was still missing a lot from them and just not knowing it. I guess 
I will see if my webalizer reports show a huge gain in hit count over 
the old records from the 3.1.20 installation, or if this behavior 
is only evident in the 3.2 branch.




I think you will find that the lines being logged previously were on
the requests which were either not rewritten at all, or were re-written
from another request's URL which was being logged.

Each of the ACL-driven directives in squid.conf is effectively
an event trigger - deciding whether or not to perform some
action. Testing only makes sense when that action choice is
required.  Squid's processing pathway checks http_access first, ... then
some others, ... then url_rewriting, ... then the destination
selection (cache_peer and others), ... then, when the transaction is
fully completed, the access_log output decisions are made.
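A sketch of the implication (the website3r ACL name and the internal
hostname are hypothetical): if the rewriter maps www.website3.com to an
internal host, the access_log decision at the end of the transaction sees
the rewritten request, so its ACL must match the rewritten domain:

  acl website3 dstdomain www.website3.com
  acl website3r dstdomain internal3.example.local

  url_rewrite_access allow website3   # tested before rewriting
  access_log daemon:/var/log/squid/website3.log combined website3r   # tested after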

Amos


Last night I applied the FreeBSD 10.1-RELEASE-p6 update and upgraded
the ports, which included Squid 3.4.12. I enabled the LAX HTTP option
in the ports configuration, which adds the --enable-http-violations
compile option, with the intention of enabling the broken_posts option in
the configuration. I will hopefully be able to apply any necessary
changes to the production system after I test them now.
When doing this update I did have a thought: the system is running in a
FreeBSD jail and not on the base system; is there a chance this issue
is caused by running within a jail? Curious if anyone has run into
specific issues running Squid in a FreeBSD jail before?


Well I am at a loss; debugging hasn't led to anything more than "a 
timeout occurs". I was able to create a test PHP form to upload files on 
an Apache server and upload up to a 264MB file. I didn't try any larger 
files, though I suspect it would work up to the 1024MB that I 
had Apache configured for. So it's not all HTTPS, only those files going 
to our OWA and Sharepoint servers. The only setting I can find that 
changes the behavior at all is to change t

Re: [squid-users] squid intercept config

2015-03-13 Thread Yuri Voinov
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



13.03.15 21:58, Monah Baki wrote:
> Hi All,
> 
> Installed squid on CentOS 6.6 and it's working, but mY access.log
> shows all TCP_MISS and no TCP_HIT. The following config:
> 
> squid.conf # Squid normally listens to port 3128 http_port 3128 
> http_port 3129 intercept

And that's all

> 
> 
> 
> iptables
> 
> # Generated by iptables-save v1.4.7 on Fri Mar 13 16:04:02 2015 
> *nat :PREROUTING ACCEPT [10:2031] :POSTROUTING ACCEPT [0:0] :OUTPUT
> ACCEPT [0:0] -A PREROUTING -s 147.245.252.13/32 -p tcp -m tcp
> --dport 80 -j ACCEPT -A PREROUTING -s 10.0.0.24/32 -p tcp -m tcp
> --dport 80 -j ACCEPT -A PREROUTING -s 147.245.252.13/32 -p tcp -m
> tcp --dport 80 -j ACCEPT -A PREROUTING -p tcp -m tcp --dport 80 -j
> REDIRECT --to-ports 3129 -A POSTROUTING -j MASQUERADE COMMIT #
> Completed on Fri Mar 13 16:04:02 2015 # Generated by iptables-save
> v1.4.7 on Fri Mar 13 16:04:02 2015 *filter :INPUT ACCEPT [0:0] 
> :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [1818:649971] -A INPUT -m
> state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j
> REJECT --reject-with icmp-port-unreachable -A INPUT -i lo -j
> ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j
> ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 3129 -m state
> --state NEW,ESTABLISHED -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp
> --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -j
> REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT
> --reject-with icmp-host-prohibited COMMIT # Completed on Fri Mar 13
> 16:04:02 2015 # Generated by iptables-save v1.4.7 on Fri Mar 13
> 16:04:02 2015 *mangle :PREROUTING ACCEPT [68:6199] :INPUT ACCEPT
> [68:6199] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [26:3064] 
> :POSTROUTING ACCEPT [26:3064] -A PREROUTING -p tcp -m tcp --dport
> 3129 -j DROP COMMIT # Completed on Fri Mar 13 16:04:02 2015
> 
> 
> Accessing sites, shows the IP address of the proxy 147.245.252.13.
> 
> Am I missing something in IPTables that it is not caching?
> 
> 
> Thanks Monah
> 
> On Fri, Mar 6, 2015 at 11:26 PM, Amos Jeffries
>  wrote:
> 
>> On 6/03/2015 1:19 a.m., Monah Baki wrote:
>>> Hi all, can anyone verify if this is correct? I need to make sure
>>> that users will be able to access the internet via the squid.
>>> 
>>> Running FreeBSD with a single interface with Squid-3.5.2
>>> 
>>> Policy based routing on Cisco with the following:
>>> 
>>> 
>>> interface GigabitEthernet0/0/1.1
>>> 
>>> encapsulation dot1Q 1 native
>>> 
>>> ip address 10.0.0.9 255.255.255.0
>>> 
>>> no ip redirects
>>> 
>>> no ip unreachables
>>> 
>>> ip nat inside
>>> 
>>> standby 1 ip 10.0.0.10
>>> 
>>> standby 1 priority 120
>>> 
>>> standby 1 preempt
>>> 
>>> standby 1 name HSRP
>>> 
>>> ip policy route-map CFLOW
>>> 
>>> 
>>> 
>>> ip access-list extended REDIRECT
>>> 
>>> deny   tcp host 10.0.0.24 any eq www
>>> 
>>> permit tcp host 10.0.0.23 any eq www
>>> 
>>> 
>>> 
>>> route-map CFLOW permit 10
>>> 
>>> match ip address REDIRECT set ip next-hop 10.0.0.24
>>> 
>>> In my /etc/pf.conf rdr pass inet proto tcp from 10.0.0.0/8 to
>>> any port 80 -> 10.0.0.24 port 3129
>>> 
>>> # block in pass in log quick on bge0 pass out log quick on
>>> bge0 pass out keep state
>>> 
>>> and finally in my squid.conf: http_port 3128 http_port 3129
>>> intercept
>>> 
>>> 
>>> 
>>> And for testing purposes from the squid server: ./squidclient
>>> -h 10.0.0.24 -p 3128 http://www.freebsd.org/
>>> 
>>> If I replace -p 3128 with -p 80, I get an access denied, and if
>>> I omit the -p 3128 completely, I can access the websites.
>> 
>> If you omit the -p entirely squidclient assumes "-p 3128" (the
>> proxy default listening port), so it works exactly the same as if
>> you had used -p 3128 explicitly.
>> 
>> If you use -p 80 you also need to change the other parameters so
>> they generate a port-80 syntax message: - the -h with IP or
>> hostname of the remote web server, and - the URL parameter being
>> a relative URL, and - the -j parameter with the Host: header domain
>> name of the server ... eg. squidclient -h www.freebsd.org -j
>> www.freebsd.org -p 80 /
>> 
>> NP: if your squidclient is too old to support -j, use this
>> instead: -H 'Host: www.freebsd.org\n'
>> 
>> ** this test should work from the squid box without having gone
>> through the proxy. Only from the client machine should it work
>> *with* NAT passing it through the proxy.
>> 
>> 
>> 
>> Using a proxy-syntax message sent directly to the proxy receiving
>> port, or with the proxy as receiving IP on port 80 (NAT'ed to
>> Squid), is a guaranteed forwarding loop failure.
>> 
>> 
>> That doesn't fix your client's issue, but hopefully makes it clear
>> that the above described test is broken enough to prevent you
>> identifying when the client issue is fixed, if that happens on
>> some change.
>> 
>> Amos ___ squid-users
>> mailing list squid-users@lists.squid-cache.org 
>> http://lists.squid-cache.org/listinfo/squid-users
>> 
> 

Re: [squid-users] squid intercept config

2015-03-13 Thread Monah Baki
Hi All,

Installed squid on CentOS 6.6 and it's working, but my access.log shows all
TCP_MISS and no TCP_HIT. The following config:

squid.conf
# Squid normally listens to port 3128
http_port 3128
http_port 3129 intercept



iptables

# Generated by iptables-save v1.4.7 on Fri Mar 13 16:04:02 2015
*nat
:PREROUTING ACCEPT [10:2031]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -s 147.245.252.13/32 -p tcp -m tcp --dport 80 -j ACCEPT
-A PREROUTING -s 10.0.0.24/32 -p tcp -m tcp --dport 80 -j ACCEPT
-A PREROUTING -s 147.245.252.13/32 -p tcp -m tcp --dport 80 -j ACCEPT
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
-A POSTROUTING -j MASQUERADE
COMMIT
# Completed on Fri Mar 13 16:04:02 2015
# Generated by iptables-save v1.4.7 on Fri Mar 13 16:04:02 2015
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1818:649971]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 3129 -m state --state
NEW,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED
-j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Fri Mar 13 16:04:02 2015
# Generated by iptables-save v1.4.7 on Fri Mar 13 16:04:02 2015
*mangle
:PREROUTING ACCEPT [68:6199]
:INPUT ACCEPT [68:6199]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [26:3064]
:POSTROUTING ACCEPT [26:3064]
-A PREROUTING -p tcp -m tcp --dport 3129 -j DROP
COMMIT
# Completed on Fri Mar 13 16:04:02 2015


Accessing sites shows the IP address of the proxy, 147.245.252.13.

Am I missing something in iptables such that it is not caching?


Thanks
Monah

On Fri, Mar 6, 2015 at 11:26 PM, Amos Jeffries  wrote:

> On 6/03/2015 1:19 a.m., Monah Baki wrote:
> > Hi all, can anyone verify if this is correct, need to make sure that users
> > will be able to access the internet via the squid.
> >
> > Running FreeBSD with a single interface with Squid-3.5.2
> >
> > Policy based routing on Cisco with the following:
> >
> >
> > interface GigabitEthernet0/0/1.1
> >
> > encapsulation dot1Q 1 native
> >
> > ip address 10.0.0.9 255.255.255.0
> >
> > no ip redirects
> >
> > no ip unreachables
> >
> > ip nat inside
> >
> > standby 1 ip 10.0.0.10
> >
> > standby 1 priority 120
> >
> > standby 1 preempt
> >
> > standby 1 name HSRP
> >
> > ip policy route-map CFLOW
> >
> >
> >
> > ip access-list extended REDIRECT
> >
> > deny   tcp host 10.0.0.24 any eq www
> >
> > permit tcp host 10.0.0.23 any eq www
> >
> >
> >
> > route-map CFLOW permit 10
> >
> > match ip address REDIRECT
> > set ip next-hop 10.0.0.24
> >
> > In my /etc/pf.conf
> > rdr pass inet proto tcp from 10.0.0.0/8 to any port 80 -> 10.0.0.24 port
> > 3129
> >
> > # block in
> > pass in log quick on bge0
> > pass out log quick on bge0
> > pass out keep state
> >
> > and finally in my squid.conf:
> > http_port 3128
> > http_port 3129 intercept
> >
> >
> >
> > And for testing purposes from the squid server:
> >  ./squidclient -h 10.0.0.24 -p 3128 http://www.freebsd.org/
> >
> > If I replace -p 3128 with -p 80, I get an access denied, and if I omit the
> > -p 3128 completely, I can access the websites.
>
> If you omit the -p entirely squidclient assumes "-p 3128" (the proxy
> default listening port), so it works exactly the same as if you had used
> -p 3128 explicitly.
>
> If you use -p 80 you also need to change the other parameters so they
> generate a port-80 syntax message:
>  - the -h with IP or hostname of the remote web server, and
>  - the URL parameters being a relative URL, and
>  - the -j parameter with Host: header domain name of the server
> ...
>  eg.
>  squidclient -h www.freebsd.org -j www.freebsd.org -p 80 /
>
> NP: if your squidclient is too old to support -j, use this instead:
>   -H 'Host: www.freebsd.org\n'
>
>  ** this test should work from the squid box without having gone through
> the proxy. Only from the client machine should it work *with* NAT
> passing it through the proxy.
>
>
>
> Using a proxy syntax message sent directly to the proxy receiving port,
> or with the proxy as receiving IP on port 80 (NAT'ed to Squid) is a
> guaranteed forwarding loop failure.
>
>
> That doesn't fix your clients issue, but hopefully makes it clear that
> the above described test is broken enough to prevent you identifying when
> the client issue is fixed if that happens on some change.
>
> Amos
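
Consolidating the two squidclient test forms Amos describes above (both
commands are taken from his reply):

# proxy-syntax test, run on the squid box itself (-p 3128 is the default)
squidclient -h 10.0.0.24 -p 3128 http://www.freebsd.org/

# origin-syntax test on port 80; it works from the squid box directly,
# and from a client machine it goes through the NAT interception. With
# an older squidclient replace -j with: -H 'Host: www.freebsd.org\n'
squidclient -h www.freebsd.org -j www.freebsd.org -p 80 /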
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 2.7, 3.4 and 3.5 Videos/Music/Images/Libraries/CDNs Booster

2015-03-13 Thread Stakres
Hi All,

Advanced Caching Add-On for Linux Squid Proxy Cache v2.7, v3.4 and v3.5 with
Videos, Music, Images, Libraries and CDNs.

New version 2.39 - March 13th 2015.
- New websites
- Tiny bugs fixed
More details on https://svb.unveiltech.com

Enjoy 

Bye Fred 



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] peek/splice working with lynx but not with firefox or chrome [SOLVED]

2015-03-13 Thread john jacob
Hi,

I also have a similar environment with squid (version 3.5.2-20150218-r13758)
and openssl 1.0.1k, but for me only a small number of HTTPS sites work with
peek and splice. For example, I can access https://www.google.com but not
https://ssllabs.com and a lot of other HTTPS domains, which give "Error
negotiating SSL on FD 15: error:140920E3:SSL
routines:SSL3_GET_SERVER_HELLO:parse tlsext (1/-1/0)" in the cache.log
file.

I can also see a bunch of other error messages in the cache.log file
relating to openssl (like "Error negotiating SSL on FD 21:
error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early (1/-1/0)" and
"Error verifying certificates") when I try to access sites like
https://www.facebook.com, https://www.yahoo.com, etc.

Squid is running on a CentOS 7 x64 box and the workstation is Win7 with
Firefox and Chrome. I tried building openssl with certain options disabled
(no-nextprotoneg and no-ec) as well as with the recent openssl version
1.0.2, but without any success.

Below is my squid config file.

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

ssl_bump peek all
ssl_bump splice all

# Squid normally listens to port 3128
http_port :3128
http_port :3129 intercept
https_port :3130 intercept ssl-bump
cert=/tmp/sslcertificates/server.cert.pem
key=/tmp/sslcertificates/server.key.pem

Does this have anything to do with my specific environment or the config
options? Any help on this is highly appreciated.
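
As a diagnostic sketch (not from the original post), checking how the
local OpenSSL library negotiates with one of the failing sites directly,
bypassing squid, can show whether the handshake problem lies in squid or
in the library:

# run on the squid box; compare the output for a working site vs a
# failing one
openssl s_client -connect ssllabs.com:443 -servername ssllabs.com < /dev/null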

Thanks in advance,
John

On Tue, Mar 10, 2015 at 10:42 PM, Roel van Meer  wrote:

> Roel van Meer writes:
>
>>>>>> I'm using squid 3.5.2 built with openssl 0.9.8zc on Slackware 13.1.
>>>>>> Traffic is redirected from port 443 to port 3130 with iptables.
>>>>>
>>>>> ... and with an older version of OpenSSL missing many of the last few
>>>>> years worth of TLS crypto features. IIRC the library releases are now
>>>>> up to 1.1.* or something. It's best to keep that kind of thing
>>>>> operating the latest versions.
>>>>
>>>> I know it's missing the latest features, but security patches are
>>>> backported. And I know it is old, but it's what I have to work with
>>>> now. Do you think it might be the cause of the problem I'm having
>>>> with peek/splice, or was it a general recommendation?
>>>
>>> It's a potential source of problems. Chrome is very much on the front
>>> line of the arms race attempting to stop things like SSL-Bump working.
>>> Firefox implement their own crypto library which tracks the latest TLS
>>> features at a similar speed of development.
>>> OpenSSL will be perpetually behind both of them, but at least the
>>> latest one(s) have better chances not to be advertising features they
>>> reject on "considered harmful" grounds.
>>
>> I'll have a go then at trying with a newer openssl and the patches from
>> the thread you mentioned.
>>
>
> With Squid 3.5.2 built with openssl 1.0.1k I can splice https connections
> with no trouble. Tested with Lync, Chrome, Firefox, and IE.
>
> So you were right. :) Thanks a lot for pointing me in the right direction!
>
> Cheers,
>
> Roel
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] https traffic using squid and icap

2015-03-13 Thread mattatrmc
Hi Michael, 

I was just reading this thread, and I've been following the same line of
thought with regard to monitoring the contents of HTTPS payloads, so that
I can conduct analysis with an IDS such as Snort or Bro.  Did you end up
having any luck with ICAP, or is this a rabbit hole that I'm going down?

Cheers,

Matt



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Read Timeout

2015-03-13 Thread sci

Hi,

I am searching for a solution to the following "problem":

A website with a simple button (POST) needs more than 15 min to send the
information that the client needs (a simple Excel sheet).
We don't want to change the global "Read Timeout" setting, so I am trying
to find something like an ACL for squid.


Is it possible to add an ACL to the squid.conf like:

# rule only for the slow website
acl slow-server dstdomain .someserver-slow.com
read_timeout 30 minutes slow-server
# end rule

--
regards,
sci
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: comm.cc:769: "Comm::IsConnOpen(conn)"

2015-03-13 Thread HackXBack
Dear Amos, I still have the same problem!
Please give advice about this critical problem.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Editing Makefile.am to include static libraries

2015-03-13 Thread Amos Jeffries
On 14/03/2015 12:28 a.m., Priya Agarwal wrote:
> I tried what you advised. Getting the same error for both methods
> (./configure LDFLAGS=-L<../tmp/../lib CXXFLAGS=-I<.../tmp../include or
> editing Makefile.am appropriately). autoreconf is failing.

I see "<" characters in your paths. That is invalid. As is the -I paths
segments "..." and "tmp.." looks like you are missing '/' somewhere.


> And also I am getting many such warnings:
> 
> | src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
> (or '*_CPPFLAGS')
> | compat/Makefile.am:5:   'src/Common.am' included from here
> | src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
> (or '*_CPPFLAGS')
> | helpers/basic_auth/DB/Makefile.am:1:   'src/Common.am' included from here
> | src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
> (or '*_CPPFLAGS')
> | helpers/basic_auth/LDAP/Makefile.am:1:   'src/Common.am' included from
> here
> | src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
> (or '*_CPPFLAGS')
> | helpers/basic_auth/MSNT-multi-domain/Makefile.am:1:   'src/Common.am'
> included from here
> | src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
> (or '*_CPPFLAGS')
> 

Those are just warnings because you are working with an old Squid
version and autotools have changed their requirements since. The current
release doesn't have quite so many warnings (some remain). Those can be
ignored.

It does mean that what I wrote as AM_CPPFLAGS needs to instead be written
as INCLUDES in your Squid version's Makefile.am.


> Final error:
> | autoreconf: automake failed with exit status: 1
> | ERROR: autoreconf execution failed.
> 
> So is something wrong with the path?

I see "<" characters in what you

> 
> I have attached the logfile as well which shows the detailed output.
> 

Buried in the warnings I see this:

src/Makefile.am:661: error: '#' comment at start of rule is unportable


automake syntax has two forms of comment.
 ## comments are autoreconf comments and ignored
 # comments are copied through as-is to the final Makefile

If you are using multi-line wrapped lists of things, that can cause
issues. It's easier to just never use comments inside the wrapped lines.


Other things to watch out for with makefiles (a short sketch follows the list):

* indentation for rules needs to be one tab, not spaces. This needs
checking after each copy-paste you do.

* multi-line rules and lists use '\' character ending to explicitly
define the wrapping.
  Be careful that lists of libraries etc use them on each line up to,
but not on, the final line of the list.
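
As a rough sketch of those rules (the usdpaa library names are borrowed
from this thread; the EXTRA_LIBS variable name is hypothetical):

## An automake comment: stripped by autoreconf, it never reaches the
## generated Makefile.
# A make comment: copied through to the generated Makefile as-is.
EXTRA_LIBS = \
	-lusdpaa_dma \
	-lusdpaa_qbman \
	-lusdpaa_sec
# every line but the last ends in '\', there are no comment lines
# inside the wrapped span, and rule recipes are indented with one tab.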


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Editing Makefile.am to include static libraries

2015-03-13 Thread Priya Agarwal
I tried what you advised. Getting the same error for both methods
(./configure LDFLAGS=-L<../tmp/../lib CXXFLAGS=-I<.../tmp../include or
editing Makefile.am appropriately). autoreconf is failing.
And also I am getting many such warnings:

| src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
(or '*_CPPFLAGS')
| compat/Makefile.am:5:   'src/Common.am' included from here
| src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
(or '*_CPPFLAGS')
| helpers/basic_auth/DB/Makefile.am:1:   'src/Common.am' included from here
| src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
(or '*_CPPFLAGS')
| helpers/basic_auth/LDAP/Makefile.am:1:   'src/Common.am' included from
here
| src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
(or '*_CPPFLAGS')
| helpers/basic_auth/MSNT-multi-domain/Makefile.am:1:   'src/Common.am'
included from here
| src/Common.am:16: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS'
(or '*_CPPFLAGS')

Final error:
| autoreconf: automake failed with exit status: 1
| ERROR: autoreconf execution failed.

So is something wrong with the path?

I have attached the logfile as well which shows the detailed output.




On Fri, Mar 13, 2015 at 3:24 PM, Amos Jeffries  wrote:

> On 13/03/2015 10:19 p.m., Priya Agarwal wrote:
> > Hi,
> >
> > I wanted to link certain static libraries and use them in squid source
> > code. I added the following lines in Makefile.am of the 'src' directory.
>
> Please be aware if you do that your new code and anything built from the
> Squid Makefile MUST be GPLv2 compliant.
>
>
> >
> > squid_LDFLAGS =
> >
> -L/media/NewVolume/yocto/build_t4240qds_release/tmp/work/ppce6500-fsl_networking-linux/usdpaa/git-r5/lib_powerpc/
> >
> > squid_LDLIBS = -lusdpaa_dma -lusdpaa_dpa_offload -lusdpaa_of
> -lusdpaa_ppac
> > -lusdpaa_qbman -lusdpaa_rmu -lusdpaa_srio -lusdpaa_dma_mem -lusdpaa_fman
> > -lusdpaa_pme \
> >-lusdpaa_process -lusdpaa_rman -lusdpaa_sec
> -lusdpaa_syscfg
> >
> > squid_CPPFLAGS =
> >
> -I/media/NewVolume/yocto/build_t4240qds_release/tmp/work/ppce6500-fsl_networking-linux/usdpaa/git-r5/include/
> >
> > But I think this is wrong, as the libraries aren't getting linked. I am getting
> > "undefined reference to" errors.
> > So what variables should I add/change.
>
> Those options are auto-generated by the autotools build chain.
>
> The -L flag should be passed by the ./configure LDFLAGS= parameter or
> coded into a Makefile.am variable as a relative path. It's unlikely the
> exact absolute path will remain the same even within your build
> machines. I see you are using some path under ".../tmp" for example.
>
>
> Libraries are added to squid_LDADD variable. That takes the form:
>
>  squid_LDADD += -lusdpaa_dma
> or
>  squid_LDADD += -L$(top_srcdir)/../usdpaa/ -lusdpaa_dma
>
>
> which assumes the squid and usdpaa library code checkouts are sitting
> next to each other.
>
>
> Same deal for the -I value in the ./configure CXXFLAGS= parameter, or in
> the Makefile.am as a relative path:
>
>  AM_CPPFLAGS += -I$(top_srcdir)/../usdpaa/include/
>
>
> Amos
>


log.do_configure.3013
Description: Binary data
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Editing Makefile.am to include static libraries

2015-03-13 Thread Amos Jeffries
On 13/03/2015 10:19 p.m., Priya Agarwal wrote:
> Hi,
> 
> I wanted to link certain static libraries and use them in squid source
> code. I added the following lines in Makefile.am of the 'src' directory.

Please be aware if you do that your new code and anything built from the
Squid Makefile MUST be GPLv2 compliant.


> 
> squid_LDFLAGS =
> -L/media/NewVolume/yocto/build_t4240qds_release/tmp/work/ppce6500-fsl_networking-linux/usdpaa/git-r5/lib_powerpc/
> 
> squid_LDLIBS = -lusdpaa_dma -lusdpaa_dpa_offload -lusdpaa_of -lusdpaa_ppac
> -lusdpaa_qbman -lusdpaa_rmu -lusdpaa_srio -lusdpaa_dma_mem -lusdpaa_fman
> -lusdpaa_pme \
>-lusdpaa_process -lusdpaa_rman -lusdpaa_sec -lusdpaa_syscfg
> 
> squid_CPPFLAGS =
> -I/media/NewVolume/yocto/build_t4240qds_release/tmp/work/ppce6500-fsl_networking-linux/usdpaa/git-r5/include/
> 
> But I think this is wrong, as the libraries aren't getting linked. I am getting
> "undefined reference to" errors.
> So what variables should I add/change.

Those options are auto-generated by the autotools build chain.

The -L flag should be passed by the ./configure LDFLAGS= parameter or
coded into a Makefile.am variable as a relative path. It's unlikely the
exact absolute path will remain the same even within your build
machines. I see you are using some path under ".../tmp" for example.
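
For instance, a minimal sketch of the ./configure form (the relative
usdpaa paths are placeholders for wherever the checkout really lives):

# hedged sketch: adjust both paths to the real library checkout
./configure \
    LDFLAGS="-L$(pwd)/../usdpaa/lib_powerpc" \
    CXXFLAGS="-I$(pwd)/../usdpaa/include"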


Libraries are added to squid_LDADD variable. That takes the form:

 squid_LDADD += -lusdpaa_dma
or
 squid_LDADD += -L$(top_srcdir)/../usdpaa/ -lusdpaa_dma


which assumes the squid and usdpaa library code checkouts are sitting
next to each other.


Same deal for the -I value in the ./configure CXXFLAGS= parameter, or in
the Makefile.am as a relative path:

 AM_CPPFLAGS += -I$(top_srcdir)/../usdpaa/include/


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Reverse Proxy Funny Logging Issue

2015-03-13 Thread Amos Jeffries
On 13/03/2015 4:31 a.m., dweimer wrote:
> On 01/23/2013 10:39 pm, Amos Jeffries wrote:
>> On 24/01/2013 4:13 a.m., dweimer wrote:
>>> On 2013-01-23 08:40, dweimer wrote:
 On 2013-01-22 23:30, Amos Jeffries wrote:
> On 23/01/2013 5:34 a.m., dweimer wrote:
>> I just upgraded my reverse proxy server last night from 3.1.20 to
>> 3.2.6, all is working well except one of my log rules, and I can't
>> figure out why.
>
> Please run "squid -k parse" and resolve the WARNING or ERROR which
> are listed.
>
> There are two possible reasons...
>
>>
>> I have a several sites behind the server, with dstdomain access
>> rules setup.
>>
>> acl website1 dstdomain www.website1.com
>> acl website2 dstdomain www.website2.com
>> acl website2 dstdomain www.website3.com
>
> Possible reason #1 (assuming this is an accurate copy-n-paste from
> your config file): you have no website3 ACL definition?

 That was a typo in the email, correct ACL is in the configuration,
 squid -k parse outputs no warnings or errors.

>
>> ...
>>
>> Followed by the access rules
>>
>> http_access allow website1
>> http_access allow website2
>> http_access allow website3
>> ...
>> http_access deny all
>>
>> Some are using rewrites
>> url_rewrite_program /usr/local/etc/squid/url_rewrite.py
>> url_rewrite_children 20
>> url_rewrite_access allow website1
>> url_rewrite_access allow website3
>> ...
>> url_rewrite_access deny all
>>
>> Then my access logs
>>
>> # First I grab everything in one
>> access_log daemon:/var/log/squid/access.log squid all
>>>
>
>> access_log daemon:/var/log/squid/website1.log combined website1
>> access_log daemon:/var/log/squid/website2.log combined website2
>> access_log daemon:/var/log/squid/website3.log combined website3
>> ...
>>
>> everything works, write down to one of the access logs, the data
>> shows up in the access.log file, the data shows up in the
>> individual logs for all the others, except that one.  If we use
>> website3 from the above example like my actual file the access
>> rule works on the url_rewrite_access allow line, but for some
>> reason is failing on the log line.  squid -k parse doesn't show
>> any errors, and shows a Processing: access_log
>> daemon:/var/log/squid/website3.log combined website3 line in the
>> output.
>>
>> The log in question was originally at the end of my access_log
>> list section, so I changed the order around to see if for some
>> reason it was only the last one not working, no change still only
>> that one not working, And the new last one in the list still works
>> as expected.
>>
>> I know the ACL is working as it works correctly on the rewrite
>> rule and the http access just above the log rules, anyone have any
>> ideas on how I can figure out why the log entry isn't working?

>>>
>>> Changed lines back to daemon, changed acl on logs to the rewrite side
>>> used on the cache_peer_access lines later in the configuration. 
>>> Works now, and logs even show up with the pre-rewrite rule host
>>> information...
>>>
>>> That does make me wonder why some lines were getting logged but not
>>> all, the sites I thought were working do have higher usage, maybe I
>>> was still missing a lot from them, and just not knowing it.  I guess
>>> I will see if my webalizer reports show a huge gain in hit count over
>>> the old records from the the 3.1.20 installation, of if this behavior
>>> is only evident in the 3.2 branch.
>>>
>>
>> I think you will find that the lines being logged previously were on
>> the requests which were either not rewritten at all or were re-written
>> from another request's URL which was being logged.
>>
>> Each of the ACL-driven directive labels in squid.conf is effectively
>> an event trigger script - deciding whether or not to perform some
>> action. This testing only makes sense when that action choice is
>> required.  Squid's processing pathway checks http_access first, ... then
>> some others, ... then url_rewriting, ... then the destination
>> selection (cache_peer and others), ... then when the transaction is
>> fully completed access_log output decision are done.
>>
>> Amos
> 
> Last night I applied the FreeBSD 10.1-RELEASE-p6 update and upgraded the
> ports, which included Squid 3.4.12. I enabled the LAX HTTP option in the
> ports configuration, which adds the --enable-http-violations compile
> option, with the intention to enable the broken_posts option in the
> configuration. I will hopefully be able to apply any necessary changes
> to the production system after I test them now.
> When doing this update I did have a thought: the system is running in a
> FreeBSD jail and not on the base system. Is there a chance this issue is
> caused by running within a jail? Curious if any

Re: [squid-users] Captive Portal authentication in Intercept mode

2015-03-13 Thread Amos Jeffries
On 13/03/2015 10:10 p.m., James Harper wrote:
>> Hey,
>>
>> I have written a basic idea with a php "login portal" that can be seen at:
>> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/
>> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Conf
>> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/PhpLoginExample
>> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Python
>> http://wiki.squid-
>> cache.org/EliezerCroitoru/SessionHelper/SplashPageTemplate
>>
>> The idea is an IP session based login.
>> The user actively needs to login and it will login the user IP address.
>> The helper(s) logic is based on time since the last user login.
>> This idea can be used as a sketch for a more advanced options with a portal.
>>
>> There are other better ways to implement this idea and one of them is
>> using a radius server.
>>
>> As you noticed there is no way to directly authenticate a proxy in
>> intercept mode.
>> Maybe someone out-there have been thinking about a way to do such a
>> thing but it is yet to be possible with squid.
>>
> 
> If you could do ntlm auth at your portal page then the user might never even 
> notice that authentication took place...
> 
> You'd need to do some sort of browser detection though - browsers could 
> handle such authentication, but programs phoning home or otherwise using web 
> services would hate it.
> 

That auth trick is usable for any kind of HTTP auth the client software
supports, e.g. Basic auth for the automated tools, usually. It's just
authenticating to the portal web server. As long as the portal is not
trying to do auth with the intercepted traffic it's fine.
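
Purely as an illustration of putting auth on the portal itself (assumed
Apache httpd syntax; the /login path and password file are hypothetical):

# protect only the portal login area with HTTP Basic auth
<Location "/login">
    AuthType Basic
    AuthName "Captive Portal"
    AuthUserFile /usr/local/etc/portal.htpasswd
    Require valid-user
</Location>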

FYI: NTLM is probably amongst the worst ways to do it given all the
nastiness that has to take place for NTLM to "work".

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Editing Makefile.am to include static libraries

2015-03-13 Thread Priya Agarwal
Hi,

I wanted to link certain static libraries and use them in squid source
code. I added the following lines in Makefile.am of the 'src' directory.

squid_LDFLAGS =
-L/media/NewVolume/yocto/build_t4240qds_release/tmp/work/ppce6500-fsl_networking-linux/usdpaa/git-r5/lib_powerpc/

squid_LDLIBS = -lusdpaa_dma -lusdpaa_dpa_offload -lusdpaa_of -lusdpaa_ppac
-lusdpaa_qbman -lusdpaa_rmu -lusdpaa_srio -lusdpaa_dma_mem -lusdpaa_fman
-lusdpaa_pme \
   -lusdpaa_process -lusdpaa_rman -lusdpaa_sec -lusdpaa_syscfg

squid_CPPFLAGS =
-I/media/NewVolume/yocto/build_t4240qds_release/tmp/work/ppce6500-fsl_networking-linux/usdpaa/git-r5/include/

But I think this is wrong, as the libraries aren't getting linked. I am getting
"undefined reference to" errors.
So what variables should I add/change.


Thanks.

Regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Captive Portal authentication in Intercept mode

2015-03-13 Thread James Harper
> Hey,
> 
> I have written a basic idea with a php "login portal" that can be seen at:
> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/
> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Conf
> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/PhpLoginExample
> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Python
> http://wiki.squid-
> cache.org/EliezerCroitoru/SessionHelper/SplashPageTemplate
> 
> The idea is an IP session based login.
> The user actively needs to login and it will login the user IP address.
> The helper(s) logic is based on time since the last user login.
> This idea can be used as a sketch for a more advanced options with a portal.
> 
> There are other better ways to implement this idea and one of them is
> using a radius server.
> 
> As you noticed there is no way to directly authenticate a proxy in
> intercept mode.
> Maybe someone out-there have been thinking about a way to do such a
> thing but it is yet to be possible with squid.
> 

If you could do ntlm auth at your portal page then the user might never even 
notice that authentication took place...

You'd need to do some sort of browser detection though - browsers could handle 
such authentication, but programs phoning home or otherwise using web services 
would hate it.

James
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl_bump for specific dstdomain

2015-03-13 Thread Amos Jeffries
On 13/03/2015 6:39 p.m., Yuri Voinov wrote:
> 
> 
13.03.15 2:37, Mukul Gandhi wrote:
>> On Thu, Mar 12, 2015 at 11:04 AM, Yuri Voinov  
>> wrote:
> 
>> You only have an external helper (which you must write yourself) in
>> 3.4.x.
> 
> 
>>> Are there any examples that I can look at to implement this
>>> external helper for doing selective ssl_bumps? And what would
>>> this helper script do anyways? All we have is the destination IP 
>>> address which is not really going to give us the actual HTTP 
>>> hostname.
> Yes and no. There is one third-party helper in the list archives, written
> in Python. None of these is included in the squid distribution.
> 
> 
>> Working with domains in ssl_bump is fully available from at least 3.5.x.
> 
> 
>>> Does the 3.5.x implementation decrypt the whole payload and then 
>>> do the ssl_bump? The "peek" option seems to imply that only the 
>>> HTTP headers are peeked at.
> Of course, as with 3.4.x. The difference is only in the mechanisms.

And no at the same time. HTTP message headers inside the encryption are
encrypted and unavailable until after the decryption is decided (bumped).

What gets peeked at is the TLS ClientHello and TLS ServerHello details.
SNI may become available by peeking when raw-IP was all that was in the
HTTP CONNECT message or intercepted TCP packets.

You can then use those non-private TLS details to decide between reject,
splice (pass-thru) or bump (decrypt) for the encrypted HTTPS data.
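
For illustration, a minimal 3.5-style sketch acting on those peeked
details (the ssl::server_name domain is a hypothetical whitelist entry):

# peek at the TLS handshake first, then splice a whitelisted server
# name and bump everything else
acl step1 at_step SslBump1
acl nobump ssl::server_name .example.com
ssl_bump peek step1
ssl_bump splice nobump
ssl_bump bump all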


> 
>>> I guess what I am asking is, is there any way we can do this 
>>> without actually decrypting the payload?
> 3.5.x peek-and-splice functionality does the bump split into stages,
> unlike 3.4.x, which does the bump in one stage.
> 

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Captive Portal authentication in Intercept mode

2015-03-13 Thread Eliezer Croitoru

Hey,

I have written a basic idea with a php "login portal" that can be seen at:
http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/
http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Conf
http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/PhpLoginExample
http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Python
http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/SplashPageTemplate

The idea is an IP-session-based login.
The user actively needs to log in, and it logs in the user's IP address.
The helper(s) logic is based on the time since the last user login.
This idea can be used as a sketch for more advanced options with a portal.

There are other, better ways to implement this idea, and one of them is
using a RADIUS server.


As you noticed, there is no way to directly authenticate to a proxy in
intercept mode.
Maybe someone out there has been thinking about a way to do such a
thing, but it is not yet possible with squid.


You can combine the PHP session login like in dyndns-based solutions.
They offer the capability to re-register a domain based on your
internet-facing IP address.
Their client checks if the IP has changed and, if so, re-registers with
the main server (with username and password).
So, for example, any new registration will revoke the old registration,
and any current registration is limited to the current session lifetime.


Like the Linux TCP keep-alive, there is an option to limit the session to
5-10 minutes; if it is not kept alive, then after 2 hours its access to
the proxy will be automatically revoked.


The logic I have written can be implemented, but it should be carefully
designed.
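
A rough squid.conf sketch of this IP-session pattern using the bundled
ext_session_acl helper (the helper path and portal URL are assumptions;
this is not the helper described above):

# grant access while the source IP has an active session (2h idle limit)
external_acl_type session ttl=60 %SRC /usr/lib/squid/ext_session_acl -t 7200
acl logged_in external session
http_access deny !logged_in
# bounce clients without a session to the login portal
deny_info 302:http://portal.example.com/login?url=%u logged_in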


All The Bests,
Eliezer Croitoru

On 13/03/2015 07:25, Ashish Patil wrote:

Hello,

I am trying to set up a Captive Portal with Squid (v.3.5.2) in Intercept
mode and SquidGuard (v.1.5) as URL rewriter. The Captive portal works off
usernames in a database, but Squid + SquidGuard work based off IP's.

The most progress I have had just says Authentication by Squid cannot be
done with Squid acting as an Intercepting Proxy. Is there some helper (even
probably in beta stage) that could help me achieve this?



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] One Time Password with squid, exists?

2015-03-13 Thread Eliezer Croitoru

On 13/03/2015 05:22, Daniel Greenwald wrote:

Ah, that would be a clever way to implement PKI authentication, but I was
thinking of something more that browsers natively support...


Hey Daniel,

What is the direction you are thinking about?
I do not know of a natively supported browser option.
The options I have seen until now are a local client, in Java or other
languages, which implements the whole security layer; the browser then
uses this layer to contact either a proxy or a route set as the default GW.


I think that it depends on the security level of the information in most
cases.
If I do not trust the client/end machine, such as in an Internet cafe, I
assume it would be pretty unwise to expect any implementation like PKI
to help.
If I do have a basic level of trust in the machine but not the network,
it is pretty simple to use a secure client, as is done in many cases
with ssh tunnels.
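
As a hedged sketch of that ssh-tunnel case (host names are hypothetical):

# forward the local browser's proxy port over ssh to a trusted squid,
# then point the browser at localhost:3128
ssh -L 3128:squid.internal.example:3128 user@gateway.example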


Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users