There is an ICAP project that does exactly what you want. That works like an
L8 filter, whereas DNS filtering is down at L5.

The higher, the better
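For context, a minimal sketch of what wiring such an ICAP content filter into squid.conf looks like (the service name and icap:// URL here are placeholders, not the actual project's endpoint):

```
icap_enable on
# Hypothetical REQMOD service doing the filtering; replace the URL
# with the one the ICAP project actually exposes.
icap_service img_filter reqmod_precache bypass=off icap://127.0.0.1:1344/reqmod
adaptation_access img_filter allow all
```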
On Jul 31, 2015 5:20 PM, "Amos Jeffries" <squ...@treenet.co.nz> wrote:

> On 1/08/2015 8:49 a.m., Stanford Prescott wrote:
> > Hi Amos. I wanted to try out the "ssl-bump splice" to send traffic to a
> > peer found in the recent snapshots for 3.5.6/7 to block Google images. I
> > compiled configured and installed the latest 3.5 snapshot and added the
> > directives you listed above to squid.conf but I am not sure I got them
> > right.
> >
> >
> > acl s1_tls_connect      at_step SslBump1
> > acl s2_tls_client_hello at_step SslBump2
> > acl s3_tls_server_hello at_step SslBump3
> > acl tls_server_name_is_ip ssl::server_name_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
> > acl google ssl::server_name .google.com
> >
> > ssl_bump peek s1_tls_connect      all
> > acl nobumpSites ssl::server_name .wellsfargo.com
> > ssl_bump splice s2_tls_client_hello nobumpSites
> > ssl_bump splice s2_tls_client_hello google
> > ssl_bump stare s2_tls_client_hello all
> > ssl_bump bump  s3_tls_server_hello all
> >
> > cache_peer forcesafesearch.google.com parent 443 0 \
> > name=GS originserver no-query no-netdb-exchange no-digest
>
> Sorry, I missed out the 'ssl' option on the peer.
>
> > acl search dstdomain .google.com
> > cache_peer_access GS allow search
> >
> cache_peer_access GS deny all
> > sslproxy_cert_error allow tls_server_name_is_ip
> > sslproxy_cert_error deny all
> > sslproxy_flags DONT_VERIFY_PEER
> >
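As a side note, the tls_server_name_is_ip ACL above is meant to match bare IPv4 literals. A quick sketch in Python (function name is mine, and it assumes the dots are escaped and the pattern is end-anchored, which the original regex is not) illustrates the intended behaviour:

```python
import re

# Intent of the tls_server_name_is_ip ACL: match TLS server "names"
# that are raw IPv4 literals. Note the escaped dots; an unescaped "."
# matches any character, and without anchoring, partial matches pass.
ip_literal = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$")

def looks_like_ip(server_name: str) -> bool:
    """Return True when the TLS server name is a raw IPv4 address."""
    return ip_literal.match(server_name) is not None

print(looks_like_ip("93.184.216.34"))   # True
print(looks_like_ip("www.google.com"))  # False
```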
> > When restarting Squid and searching in Google images for "sex" it still
> > shows images that I want to be able to block with safesearch.
>
> Other than the bit I missed out mentioning, it looks okay to me. Though I
> have not tested any of this myself, so YMMV.
>
> Amos
>
> >
> > On Thu, Jul 16, 2015 at 11:24 PM, Amos Jeffries wrote:
> >
> >> On 19/05/2015 5:49 a.m., Andres Granados wrote:
> >>> Hello! I need help blocking pornographic images on Google. I have
> >>> tried several options without success: http_reply_access with
> >>> request_header_add, and even a DNS-based configuration. I think
> >>> request_header_add is the best approach, but it has not worked for
> >>> me. This is for a school deployment, so I hope you can help. Thanks!
> >>>
> >>
> >> FYI; Christos has added a tweak to the "ssl-bump splice" handling that
> >> permits sending the traffic to a cache_peer configured something like
> this:
> >>
> >>  acl example ssl::server_name .example.com
> >>  ssl_bump splice example
> >>  ssl_bump peek all
> >>
> >>  cache_peer forcesafesearch.example.com parent 443 0 \
> >>     name=GS \
> >>     originserver no-query no-netdb-exchange no-digest
> >>
> >>  acl search dstdomain .example.com
> >>  cache_peer_access GS allow search
> >>  cache_peer_access GS deny all
> >>
> >> The idea being that you can use this on intercepted (or forward-proxy)
> >> HTTPS traffic instead of hacking about with DNS to direct clients at the
> >> servers Google use to present "safe" searching.
> >>
> >> This should be available in 3.5.7, or the current 3.5 snapshots.
> >>
> >> Cheers
> >> Amos
> >> _______________________________________________
> >> squid-users mailing list
> >> squid-users@lists.squid-cache.org
> >> http://lists.squid-cache.org/listinfo/squid-users
> >>
> >
>
>