On 4/02/2013 10:14 p.m., PARAM KRISH wrote:
Hi Amos, thanks so much for helping out... Here are my replies...

On Mon, Feb 4, 2013 at 1:39 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
On 4/02/2013 8:22 p.m., PARAM KRISH wrote:
Hello Squid Users

I am in need of some help. I believe this must have been done by someone
already, but I could not find the right document to understand it
well.

Setup:

Squid v3.2.3 on a RHEL 6 64-bit VM (say server1)
Apache 2.2.15 on RHEL 6 64-bit VMs (running on server1 and on server2).

Requirement:

1. All HTTP traffic should automatically redirect to HTTPS without any
change in the URL (except http -> https).
2. Two URLs, A.B.C.D and A.C.D, both pointing to the server1 IP, must
work well.
3. When the Apache on server1 dies, all traffic should keep working via
the Apache on server2. All traffic must go through Squid for reverse
proxying/caching.

What I have done so far:

1. Apache vhosts (8080 & 443): the 8080 vhosts use a RewriteRule to redirect to https (sketched below).
2. Squid listens on 80 and 3123 (for SSL), with these entries in its
config.
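For reference, a sketch of what such an 8080 vhost could look like on the Apache side (the exact Apache config was not posted, so the names and flags here are illustrative; assumes mod_rewrite is loaded):

  <VirtualHost *:8080>
      ServerName a.b.c.d
      ServerAlias a.c.d
      RewriteEngine On
      # send every plain-HTTP request to the same URL under https,
      # preserving whichever hostname the client used
      RewriteRule ^/(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
  </VirtualHost>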

Why 3123 (er, actually 3128)? Having different ports on the public-facing
proxy from what the backends are using is a common cause of trouble, because
virtual ports in HTTP are a tricky concept. Speaking of which...

Also, 3128 is a well-known proxy port. Using it for things other than normal
forward-proxy traffic is another common cause of trouble, usually from the
virus/worm side of the Internet.
3128 is what I am using; sorry, 3123 was a typo. So, what port are you
recommending if not this one? If I make it listen on the same port as
Apache, every time I restart Apache or Squid, one of them complains
that the port is already in use.


This only happens if the IP address and port are identical, which is why I mentioned giving the Apaches their own IPs. Once you have Squid operating, they do not need to be accessible to anyone but Squid.
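For example, a minimal sketch of that separation (the 10.0.1.x addresses are hypothetical stand-ins for real internal IPs):

  # squid.conf on server1 -- Squid binds only the client-facing IP
  http_port 10.0.1.1:80 accel defaultsite=a.b.c.d vhost
  https_port 10.0.1.1:443 accel cert=/usr/local/apache2/conf/folder/certnew.cer key=/usr/local/apache2/conf/folder/a.b.c.d.key defaultsite=a.b.c.d

  # httpd.conf on each backend -- Apache binds only its own private IP
  Listen 10.0.1.2:8080

That way both daemons can restart independently without ever colliding on a listening socket.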


https_port 3128 accel cert=/usr/local/apache2/conf/folder/certnew.cer key=/usr/local/apache2/conf/folder/a.b.c.d.key defaultsite=a.b.c.d vhost

This basic style of SSL config does not support virtual hosting. You require
the certificate-generator options to create certificates for each
virtual-hosted FQDN on demand.
YHBW: this is not a very commonly used configuration (yet), and we have had
little feedback as to how well (or not) it works.

NP: if you only have the one domain FQDN being hosted, or multiple domains
that are all aliases of one main domain, drop the vhost option.
Our servers are for internal use and we use only self-signed
certificates. The domains are local and are correctly set up in DNS.

You missed the point. The config you have services only the certificate/key for the domain "a.b.c.d"; no other domain will work properly using that certificate. You *need* the dynamic certificate-generation feature of Squid when using "vhost" on https_port.
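To make that concrete: with a single served domain, the sketch below would be enough (paths copied from the config above, with vhost dropped). Note that clients browsing to https://a.c.d/ would still get a certificate warning unless that name is listed in the certificate's subjectAltName.

  https_port 443 accel cert=/usr/local/apache2/conf/folder/certnew.cer key=/usr/local/apache2/conf/folder/a.b.c.d.key defaultsite=a.b.c.d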


http_port 80 accel defaultsite=a.b.c.d vhost

cache_peer server1.b.c.d parent 8080 0 originserver round-robin
cache_peer server2.b.c.d parent 8080 0 originserver round-robin

What I understand is: requests coming to port 80 (Squid) are forwarded to
either Apache's 8080, where they get rewritten to 443 and processed.

Erm, all requests arriving on both port 80 and port 3128 at your Squid are
sent to the Apaches' port 8080 as plain-HTTP traffic. There is no distinction
between HTTP and HTTPS inside a reverse proxy; HTTPS is just a wire-level
TLS/SSL wrapping around regular HTTP, after all. Once the reverse proxy's
https_port terminates the TLS/SSL, it is gone.
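One practical consequence: if the backend application (MediaWiki here) needs to know that the client connected over HTTPS, that fact has to be re-attached to the plain-HTTP request. cache_peer has a front-end-https option for this, which adds a "Front-End-Https: On" header the backend can be configured to honour; a sketch against the peers from your config:

  # 'auto' adds the header only for requests that arrived over TLS
  cache_peer server1.b.c.d parent 8080 0 originserver round-robin front-end-https=auto
  cache_peer server2.b.c.d parent 8080 0 originserver round-robin front-end-https=auto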

I think what you are actually seeing is that the Apaches are handling the
traffic and have been configured to ensure all public URLs are
https://a.b.c.d:3128/, yes?
No public URLs here. All links are internal and available only to
internal employees.

'public' is a viewpoint relative to the service. In this case you have two levels of 'internal', with the users/clients being 'public'-internal and the Apaches being 'private'-internal.


My requirement is that all HTTP requests to those two URLs
(a.b.c.d and a.c.d) must get redirected (or rewritten) to HTTPS
automatically.
I was thinking that a Squid reverse proxy listening at 443 would help
pick things up from cache the next time we get the request, since this
requirement is for MediaWiki. Are you saying this won't work well, or
is useless?

No. That is the wrong way to go about it.

You have the DNS for the domain pointing at Squid's IP. Have Squid listening on both port 80 and 443 and receiving traffic. Your Squid-3.2 can test for HTTP vs HTTPS and redirect clients to the same URL under https:// like so...

 acl HTTP myportname 80
 http_access deny HTTP
 deny_info https://%H%R HTTP

The port-443 traffic gets past that and gets delivered to the Apaches.
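Spelled out together with the port lines (the name= label is my own addition to make the ACL match explicit; cert paths as in your config):

  http_port 80 accel defaultsite=a.b.c.d vhost name=port80
  https_port 443 accel cert=/usr/local/apache2/conf/folder/certnew.cer key=/usr/local/apache2/conf/folder/a.b.c.d.key defaultsite=a.b.c.d

  acl HTTP myportname port80
  http_access deny HTTP
  # the 301: prefix makes it a permanent redirect, which browsers cache
  deny_info 301:https://%H%R HTTP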


Also, when I tried Squid listening on 443 with cache_peer entries for
server1:443 and server2:443, I got something like this when Apache1
goes down.

Bad Request

Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
Hint: https://server2.b.c.d/

The hinted link works fine when I click it, but I obviously want this to
be taken care of automatically rather than letting the user see that
this is happening when server1's Apache is dead.

Hmm. Okay that is the same problem as before, just with a different error message.

Having thought about it a bit more, I think you are hitting the bug where round-robin, like all the specialized load-balancing algorithms, presents Squid with only one forwarding-path option at a time, so the failover which Squid normally tries to do cannot work. Is there a particular reason you need round-robin? Or would it work just fine with two peers, letting Squid fail over when one gets loaded/slow?

If you do require round-robin, perhaps you would sponsor some work (either in paid developer hours, or by donating a patch) towards making that algorithm work better with 3.2+?
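As an aside, the "You're speaking plain HTTP to an SSL-enabled server port" text above means Squid was sending unencrypted HTTP to Apache's port 443. If you do want Squid to re-encrypt traffic towards the peers, cache_peer takes an ssl flag; a sketch (DONT_VERIFY_PEER only because the backends use self-signed certificates):

  cache_peer server1.b.c.d parent 443 0 originserver round-robin ssl sslflags=DONT_VERIFY_PEER
  cache_peer server2.b.c.d parent 443 0 originserver round-robin ssl sslflags=DONT_VERIFY_PEER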

Amos
