Fundamentally, my intent is to set up Squid for home use to block
advertising and malware and, in particular, to perform content adaptation.
One of my specific goals is to modify search URL paths to restrict
explicit search results (e.g. affixing safe=active to any Google
search path).
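That kind of per-URL rewriting is commonly done with an external helper fed by Squid's url_rewrite_program directive. Below is a minimal, hypothetical sketch of such a helper (the function name, the "google." host test, and the bare echo-back protocol are my assumptions for illustration, not the poster's actual setup):

```python
#!/usr/bin/env python3
# Sketch of a Squid url_rewrite_program helper that appends safe=active
# to Google search URLs. All names here are illustrative assumptions.
import sys
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def enforce_safe_search(url):
    """Return the URL with safe=active forced on Google /search paths."""
    parts = urlsplit(url)
    if parts.hostname and "google." in parts.hostname and parts.path == "/search":
        query = dict(parse_qsl(parts.query))
        query["safe"] = "active"          # force SafeSearch on
        parts = parts._replace(query=urlencode(query))
        return urlunsplit(parts)
    return url

def main():
    # Squid writes one request per line: "URL client/fqdn ident method ..."
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        # Echo the (possibly rewritten) URL back to Squid, one per line.
        sys.stdout.write(enforce_safe_search(fields[0]) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

It would be wired in with something like `url_rewrite_program /usr/local/bin/safesearch.py` in squid.conf; note that for HTTPS search traffic this only works if Squid can see the decrypted URL (i.e. with SSL bumping in place).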
Hi David, I
I can turn off the X-Forwarded-For header in Squid completely by using the
directive forwarded_for off or forwarded_for delete globally. I would like to
disable that header only for specific ACLs, so I can disable it
only for given URLs and keep it enabled for others. Is there any way to do
that?
Can you check if the following solution works for you?
On Wed, Jul 9, 2014 at 7:53 PM, joseph_jose joevyp...@gmail.com wrote:
I can turn off the X-Forwarded-For header in Squid completely by using the
directive forwarded_for off or forwarded_for delete globally. I would like to
disable that header
Sorry for the last email. Forgot to put the link.
See if the following solution works for you:
http://serverfault.com/questions/571895/squid-disable-x-forwarded-for-but-only-for-specific-acls
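The linked approach amounts to controlling the header per ACL with request_header_access instead of the global forwarded_for directive. A minimal sketch, where the ACL name and domain are placeholders:

```
# squid.conf sketch: strip X-Forwarded-For only for selected destinations.
# "hide_xff_sites" and ".example.com" are illustrative placeholders.
acl hide_xff_sites dstdomain .example.com
request_header_access X-Forwarded-For deny hide_xff_sites
request_header_access X-Forwarded-For allow all
```

Exact behavior can vary by Squid version, so it is worth testing against the sites in question.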
Regards
HASSAN
On Wed, Jul 9, 2014 at 8:35 PM, Nyamul Hassan nya...@gmail.com wrote:
Can you check
Unfortunately no; because each system has minor differences, the desired
rules to be used by Squid vary based on other programs and
interactions within the system. This is why I just typed them out here,
so others can figure out why squid or pinger or ssl_crtd is getting
caught by SELinux,
Hello,
I want to make a combined cache by defining two proxies in a cluster as
siblings. To minimize traffic and latency, I want to do this by using cache
digests and not ICP.
Squid is compiled with --enable-cache-digests; the Squid version is 3.3.12.
cache_peer configuration in
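Since the message is cut off before the configuration itself, here is a hedged sketch of what a digest-based sibling relationship typically looks like (the hostname is a placeholder): setting the ICP port to 0 together with no-query disables ICP queries, while cache digests are fetched over HTTP automatically when both peers are built with digest support.

```
# squid.conf sketch: sibling peering via cache digests, no ICP.
# peer1.example.net is a placeholder hostname.
cache_peer peer1.example.net sibling 3128 0 no-query proxy-only
```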
Hi,
I've been trying to set up Squid for a variety of reasons, but one is
to cache commercials from Hulu. Hulu takes up a huge portion of my
bandwidth. It'd be pointless to try to cache the TV shows, since we
typically only watch them once (I think they're encrypted anyway), but
I'd like to cache
On 07/08/2014 08:17 PM, David Marcos wrote:
b. HTTP Strict Transport Security (HSTS): Some pages flat-out
reject any SSL bumping due to HSTS. I am using Chrome, which I'm sure
aggravates the issue. Is there a way to configure Squid to get around
HSTS? (Yes, I know this may be a dumb
Fought with this one for days, and figured it out hours after sending
this email. I apologize for the noise. I'm replying to my own message
in case it helps someone else.
For reasons I don't quite understand, maximum_object_size 2048 MB
wasn't sufficient to let it cache large objects into
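Since the message is truncated before the actual fix, one common cause of exactly this symptom is worth noting as a guess: maximum_object_size is consulted when the cache_dir line is parsed, so it must appear before cache_dir in squid.conf (cache_dir also accepts its own max-size option). A sketch under that assumption, with a placeholder cache directory:

```
# squid.conf sketch: maximum_object_size must precede cache_dir,
# otherwise the cache_dir keeps the default object-size limit.
maximum_object_size 2048 MB
cache_dir aufs /var/spool/squid 100000 16 256
```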
On Wednesday 09 July 2014 at 21:54:15, Ian Nofziger wrote:
Fought with this one for days, and figured it out hours after sending
this email. I apologize for the noise. I'm replying to my own message
in case it helps someone else.
For reasons I don't quite understand, maximum_object_size
On Wed, Jul 9, 2014 at 4:16 PM, Antony Stone
antony.st...@squid.open.source.it wrote:
On Wednesday 09 July 2014 at 21:54:15, Ian Nofziger wrote:
Fought with this one for days, and figured it out hours after sending
this email. I apologize for the noise. I'm replying to my own message
in case
Running into this issue on one powerful system. The OS (Scientific Linux
6.5) sees 16 CPU cores (2 CPU sockets, each with 4 cores plus
Hyper-Threading). The unusual part is that this same setup works fine on
another system with dual core + HT using 3 workers.
I tried to set up the SMP options
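The message is cut off before the options themselves, but a typical SMP setup of the kind being described boils down to a workers line plus per-worker cache_dir entries keyed on the ${process_number} macro. A minimal sketch (paths and sizes are placeholders):

```
# squid.conf sketch: 3 SMP workers, each with its own cache_dir.
workers 3
cache_dir aufs /var/spool/squid/${process_number} 10000 16 256
```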
Have a look here for a correct solution:
http://wiki.squid-cache.org/ConfigExamples/Intercept/AtSource
(Example: replace SQUIDIP with the public IP which Squid may use for its
listening port and outbound connections.)
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
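The quoted rule is truncated; filled out with placeholder values for illustration (SQUIDIP and the 3129 intercept port are my assumptions here — use the actual values from the wiki page), the at-source redirect would look something like:

```shell
# Sketch of the full at-source redirect; SQUIDIP and port 3129 are
# placeholders, not values confirmed by the thread.
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination SQUIDIP:3129
# Exempt Squid's own outgoing traffic so it is not looped back into the proxy:
iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT
```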
Alex, et al,
Thanks very much for the suggestions. The tip-off that HSTS issues
may actually be a symptom, not the problem, was key. Turns out I did
not properly install my self-signed root certificate into my laptop.
Once I fixed that, everything started working.
Thanks again for the help!
Hi,
I'm using a Mikrotik 1100 AH X2 router;
here is my Basic Data from your latest script.
http://pastebin.com/GHkD5yYx
Thanks,
Ganesh J
On Wed, Jul 9, 2014 at 1:08 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
What router are you using??
Eliezer
P.S. I will be at the squid irc channel
What rules are you using on the Mikrotik? How many interfaces on the
Mikrotik are you using for this purpose? How many NICs are there on the
Squid box? Can you give an idea of your network diagram?
Also, a few days ago, I also posted the rules that I am using
I use two ports on the Mikrotik router: one for WAN and the other for LAN. I
have no rules set up in the router except NATing src and dst for
private to public IP and vice versa.
There are two NICs in the Squid box, but I am using only one.
The LAN from the router is connected to a switch, and the Squid NIC is
There you go. NAT rules will not work with TProxy; you need to play
with mangle rules. The ones I am using are:
/ip firewall mangle
add action=mark-routing chain=prerouting disabled=no dst-port=80
new-routing-mark=_to_squid_ passthrough=yes protocol=tcp
src-address-list=_to_squid_ src-mac-address=!MAC
That's very odd. I'd try calling them... There are quite a few folks
blocking proxies these days. What I do is remove the Via and
X-Forwarded-For headers with the following directives:
check_hostnames off
forwarded_for delete
via off
The same configuration in an earlier version of squid doesn't
On 07/10/2014 05:05 AM, sq...@proxyplayer.co.uk wrote:
The same configuration in an earlier version of Squid doesn't get
rejected by Google, but in the new version it is rejected, so is it
possible Squid is doing something differently?
Probably not too much...
What version of