Hi all,
+1 for going with the SSL pass-through approach. Once the testing with
staging is done, I will focus on this approach.
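
For reference, a rough sketch of what SSL pass-through could look like in
HAProxy (frontend/backend names, ports and pod addresses are illustrative;
each pod terminates TLS itself):

```
frontend https-in
    mode tcp
    bind *:443
    # Inspect the TLS ClientHello so SNI-based routing is possible later
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend app-servers

backend app-servers
    mode tcp
    balance roundrobin
    # Cookies are not visible in pass-through mode, so session
    # stickiness has to be based on the client source IP
    stick-table type ip size 200k expire 30m
    stick on src
    server pod1 10.246.1.5:443 check
    server pod2 10.246.1.6:443 check
```

The trade-off discussed below still applies: each pod behind the load
balancer needs its own certificate in this mode.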

Thanks

On Mon, Mar 14, 2016 at 10:29 AM, Manjula Rathnayake <manju...@wso2.com>
wrote:

> Hi Imesh,
>
> On Mon, Mar 14, 2016 at 10:20 AM, Imesh Gunaratne <im...@wso2.com> wrote:
>
>> Hi Manjula,
>>
>> On Mon, Mar 14, 2016 at 10:06 AM, Manjula Rathnayake <manju...@wso2.com>
>> wrote:
>>
>>> Hi Imesh,
>>>
>>> On Mon, Mar 14, 2016 at 9:56 AM, Imesh Gunaratne <im...@wso2.com> wrote:
>>>
>>>>
>>>> On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle <nish...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>> Currently I'm working on configuring HAProxy load balancing support
>>>>> for App Cloud.
>>>>> While checking the session affinity functionality in Kubernetes, I
>>>>> have verified load balancing of HTTP traffic with HAProxy. It can be
>>>>> done using the Kubernetes contrib repo, 'service loadbalancer' [1].
>>>>>
>>>>> To check load balancing of HTTPS traffic, the approach taken is SSL
>>>>> termination. In the App Cloud scenario, the Kubernetes cluster is not
>>>>> directly exposed and the load balancer exists within the cluster, so
>>>>> the communication between the application servers and the load
>>>>> balancer happens internally. Although SSL termination ends the secure
>>>>> connection at the load balancer, for the above reasons it seems the
>>>>> better solution. SSL termination was chosen over SSL pass-through
>>>>> because pass-through requires handling a separate SSL certificate
>>>>> for each server behind the load balancer.
>>>>>
>>>> -1 for this approach; IMO this has a major security risk.
>>>>
>>>> Let me explain the problem. If we offload SSL at the service load
>>>> balancer, all traffic beyond the load balancer will use HTTP and the
>>>> message content will be visible to anyone on the network inside K8s,
>>>> which means someone can simply start a container in K8s and trace all
>>>> the HTTP traffic going through.
>>>>
>>>
>>
>>> Below is from the HAProxy documentation [1]. AFAIU, HAProxy-to-backend
>>> communication can happen with HTTPS enabled but without validating the
>>> server certificate.
>>>
>>
>>
>>> verify
>>> <http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-verify>
>>> [none|required]
>>>
>>> This setting is only available when support for OpenSSL was built in. If set
>>> to 'none', server certificate is not verified. In the other case, The
>>> certificate provided by the server is verified using CAs from 'ca-file'
>>> and optional CRLs from 'crl-file'. If 'ssl_server_verify' is not specified
>>> in global  section, this is the default. On verify failure the handshake
>>> is aborted. It is critically important to verify server certificates when
>>> using SSL to connect to servers, otherwise the communication is prone to
>>> trivial man-in-the-middle attacks rendering SSL totally useless.
>>>
>> IMO there is still a major problem if we are not verifying the SSL
>> certificate. See the highlighted text.
>>
> +1. We will attend to this once the initial end-to-end scenario is
> working in App Cloud. I am +1 for using a self-signed cert in the pods
> and adding it to the HAProxy truststore to fix the above issue.
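>
> For reference, a minimal sketch of that fix on the HAProxy side (backend
> name, pod addresses and CA-bundle path are illustrative):
>
> ```
> backend app-servers
>     mode http
>     # Re-encrypt traffic to the pods and verify their (self-signed)
>     # certificates against the CA bundle trusted by HAProxy
>     server pod1 10.246.1.5:443 ssl verify required ca-file /etc/ssl/pods-ca.pem
>     server pod2 10.246.1.6:443 ssl verify required ca-file /etc/ssl/pods-ca.pem
> ```
>
> With 'verify required' and a 'ca-file' containing the pods' CA, the
> handshake aborts on an unexpected certificate, closing the MITM gap the
> documentation above warns about.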
>
> thank you.
>
>>
>> Thanks
>>
>>
>>> [1].
>>> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl%20%28Server%20and%20default-server%20options%29
>>>
>>> thank you.
>>>
>>>
>>>> Thanks
>>>>
>>>>> To configure load balancing with SSL termination, I had to customize
>>>>> the Kubernetes haproxy.conf template of the service loadbalancer repo
>>>>> to support SSL termination.
>>>>>
>>>>> To enable SSL termination, the Kubernetes services have to be
>>>>> annotated with
>>>>>       serviceloadbalancer/lb.sslTerm: "true"
>>>>>
>>>>> The default load balancing approach in the service loadbalancer repo
>>>>> is simple fanout, which routes traffic based on the context path. As
>>>>> we need to load balance based on the host name, we need to go with
>>>>> name-based virtual hosting instead, which can be achieved via the
>>>>> following annotation:
>>>>>      serviceloadbalancer/lb.Host: "<host-name>"
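>>>>>
>>>>> For example, a Service carrying both annotations might look like this
>>>>> (name, host and ports are illustrative):
>>>>>
>>>>> ```yaml
>>>>> apiVersion: v1
>>>>> kind: Service
>>>>> metadata:
>>>>>   name: myapp
>>>>>   annotations:
>>>>>     serviceloadbalancer/lb.sslTerm: "true"
>>>>>     serviceloadbalancer/lb.Host: "myapp.example.com"
>>>>> spec:
>>>>>   selector:
>>>>>     app: myapp
>>>>>   ports:
>>>>>   - port: 8080
>>>>>     targetPort: 8080
>>>>> ```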
>>>>>
>>>>> Any suggestions on the approach taken are highly appreciated.
>>>>>
>>>>> Thank you
>>>>>
>>>>> [1].
>>>>> https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Imesh Gunaratne*
>>>> Senior Technical Lead
>>>> WSO2 Inc: http://wso2.com
>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>> W: http://imesh.io
>>>> Lean . Enterprise . Middleware
>>>>
>>>>
>>>
>>>
>>> --
>>> Manjula Rathnayaka
>>> Associate Technical Lead
>>> WSO2, Inc.
>>> Mobile:+94 77 743 1987
>>>
>>
>>
>>
>>
>>
>
>
>



-- 
*Nishadi Kirielle*
*Software Engineering Intern*
Mobile : +94 (0) 714722148
Blog : http://nishadikirielle.blogspot.com/
nish...@wso2.com
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
