Hi,
The limitation is in the service load balancer. If we go with that
approach, we need to patch the Kubernetes service load balancer code so
that services can be annotated with per-port definitions. The current
approach is to create two separate services, one for HTTP and one for
HTTPS.
We can use node ports as well; the host ports are already available in the
AWS load balancer.
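To illustrate, here is a rough sketch of the current two-service approach
(the service names, host name and ports are only placeholders, not the
actual App Cloud definitions; the annotations are the ones discussed
further down the thread):

# Two services selecting the same pods, so that HTTP and HTTPS traffic
# can carry different service loadbalancer annotations.
apiVersion: v1
kind: Service
metadata:
  name: myapp-http                                     # placeholder name
  annotations:
    serviceloadbalancer/lb.Host: "myapp.example.com"   # placeholder host
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-https                                    # placeholder name
  annotations:
    serviceloadbalancer/lb.Host: "myapp.example.com"
    serviceloadbalancer/lb.sslTerm: "true"             # HTTPS handled by HAProxy
spec:
  selector:
    app: myapp
  ports:
    - port: 443
      targetPort: 8443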

Regards
Nishadi

On Thu, Apr 7, 2016 at 10:14 PM, Imesh Gunaratne <im...@wso2.com> wrote:

>
>
> On Wed, Mar 16, 2016 at 10:49 AM, Nishadi Kirielle <nish...@wso2.com>
> wrote:
>>
>>
>> In the current deployment, we have tested a service with a single exposed
>> port. This is because the service annotation, which identifies whether the
>> service is exposed over HTTP or HTTPS, is common to all exposed ports of
>> the service. If we go with that approach, we need several services in
>> order to support both HTTP and HTTPS traffic. Thus, I'm currently
>> attempting to deploy a service with several exposed ports.
>>
>
> AFAIU this is not a restriction enforced by K8S services but rather a
> limitation in the service load balancer (the way it uses service
> annotations) [3]. K8S services allow defining any number of annotations
> with arbitrary key/value pairs. We can change the service load balancer to
> use an annotation per port to handle this.
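As a rough sketch, an annotation-per-port scheme could look like the
following; the per-port annotation keys below are hypothetical and would
only work after patching the service load balancer as suggested:

apiVersion: v1
kind: Service
metadata:
  name: myapp                                  # placeholder name
  annotations:
    # Hypothetical per-port keys; the current service load balancer only
    # understands a single service-wide annotation.
    serviceloadbalancer/lb.sslTerm.http: "false"
    serviceloadbalancer/lb.sslTerm.https: "true"
spec:
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443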
>
>>
>> In addition, another concern is how the HAProxy load balancer itself is
>> exposed to external traffic. Currently this is done through host ports. If
>> we use node ports instead, the port will be exposed on all the nodes,
>> whereas a host port only exposes the port on the node where the load
>> balancer runs.
>>
>
> Why do we use host ports instead of node ports? I believe traffic gets
> delegated to HAProxy via an AWS load balancer. If so, what would happen if
> the above host becomes unavailable?
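For reference, a NodePort alternative could look roughly like the sketch
below (name, labels and port numbers are only placeholders); it would open
the same port on every node, so the AWS load balancer could target any
node instead of one specific host:

apiVersion: v1
kind: Service
metadata:
  name: haproxy-nodeport              # placeholder name
spec:
  type: NodePort
  selector:
    app: service-loadbalancer         # assumed label on the HAProxy pods
  ports:
    - name: http
      port: 80
      nodePort: 30080                 # opened on every node in the cluster
    - name: https
      port: 443
      nodePort: 30443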
>
> [3]
> https://github.com/nishadi/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L468
>
> Thanks
>
>
>>
>> Appreciate your feedback on the approach taken.
>>
>> Thanks
>>
>> [1].
>> https://github.com/nishadi/contrib/commit/f169044546dc8a84a359d889bb186aef83d9c422
>> [2].
>> https://github.com/nishadi/contrib/blob/master/service-loadbalancer/rc.yaml#L52
>>
>> On Mon, Mar 14, 2016 at 10:39 AM, Nishadi Kirielle <nish...@wso2.com>
>> wrote:
>>
>>> Hi all,
>>> +1 for going with the SSL pass-through approach. Once the testing with
>>> staging is done, I will focus on this approach.
>>>
>>> Thanks
>>>
>>> On Mon, Mar 14, 2016 at 10:29 AM, Manjula Rathnayake <manju...@wso2.com>
>>> wrote:
>>>
>>>> Hi Imesh,
>>>>
>>>> On Mon, Mar 14, 2016 at 10:20 AM, Imesh Gunaratne <im...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Manjula,
>>>>>
>>>>> On Mon, Mar 14, 2016 at 10:06 AM, Manjula Rathnayake <
>>>>> manju...@wso2.com> wrote:
>>>>>
>>>>>> Hi Imesh,
>>>>>>
>>>>>> On Mon, Mar 14, 2016 at 9:56 AM, Imesh Gunaratne <im...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 13, 2016 at 11:36 PM, Nishadi Kirielle <nish...@wso2.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Hi all,
>>>>>>>> Currently I'm working on configuring HAProxy load balancing support
>>>>>>>> for App Cloud.
>>>>>>>> While checking the session affinity functionality in Kubernetes, I
>>>>>>>> have verified the load balancing of HTTP traffic with HAProxy. It
>>>>>>>> could be done using the Kubernetes contrib repo's 'service
>>>>>>>> loadbalancer' [1].
>>>>>>>>
>>>>>>>> To handle load balancing of HTTPS traffic, the approach taken is SSL
>>>>>>>> termination. In the App Cloud scenario, the Kubernetes cluster is not
>>>>>>>> directly exposed and the load balancer exists within the cluster, so
>>>>>>>> the communication between the application servers and the load
>>>>>>>> balancer happens internally. Although SSL termination ends the secure
>>>>>>>> connection at the load balancer, for the above reasons SSL
>>>>>>>> termination seems to be the better option. SSL termination was chosen
>>>>>>>> over SSL pass-through because of the complexity of handling a
>>>>>>>> separate SSL certificate for each server behind the load balancer in
>>>>>>>> the pass-through case.
>>>>>>>>
>>>>>>> -1 for this approach, IMO this has a major security risk.
>>>>>>>
>>>>>>> Let me explain the problem. If we offload SSL at the service load
>>>>>>> balancer, all traffic beyond the load balancer will use HTTP and the
>>>>>>> message content will be visible to anyone on network inside K8S. Which
>>>>>>> means someone can simply start a container in K8S and trace all HTTP
>>>>>>> traffic going through.
>>>>>>>
>>>>>>
>>>>>
>>>>>> Below is from the HAProxy documentation [1]. AFAIU, the HAProxy to
>>>>>> backend server communication happens over HTTPS, but without
>>>>>> validating the server certificate.
>>>>>>
>>>>>
>>>>>
>>>>>> verify [none|required]
>>>>>> <http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-verify>
>>>>>>
This setting is only available when support for OpenSSL was built in. If
>>>>>> set to 'none', server certificate is not verified. In the other case,
>>>>>> The certificate provided by the server is verified using CAs from
>>>>>> 'ca-file' and optional CRLs from 'crl-file'. If 'ssl_server_verify' is
>>>>>> not specified in global section, this is the default. On verify failure
>>>>>> the handshake is aborted. It is critically important to verify server
>>>>>> certificates when using SSL to connect to servers, otherwise the
>>>>>> communication is prone to trivial man-in-the-middle attacks rendering
>>>>>> SSL totally useless.
>>>>>>
>>>>> IMO there is still a major problem if we are not verifying the SSL
>>>>> certificate. See the highlighted text.
>>>>>
+1. We will attend to this once the initial end-to-end scenario is
>>>> working in App Cloud. I am +1 for using a self-signed cert in the pods
>>>> and adding it to the truststore of HAProxy to fix the above issue.
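As a rough sketch of how that could be wired up (the secret name, key and
mount path below are assumptions, not the current setup): mount a CA
certificate into the HAProxy pod so that the generated haproxy.cfg can
reference it with the 'verify required' and 'ca-file' server options.

apiVersion: v1
kind: Secret
metadata:
  name: backend-ca                    # assumed name
type: Opaque
data:
  ca.crt: <base64-encoded self-signed CA certificate>
---
# Fragment of the service loadbalancer pod template: mount the CA so the
# HAProxy backend 'server' lines can point 'ca-file' at it.
spec:
  containers:
    - name: haproxy
      volumeMounts:
        - name: backend-ca
          mountPath: /etc/haproxy/ca  # assumed path
          readOnly: true
  volumes:
    - name: backend-ca
      secret:
        secretName: backend-ca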
>>>>
>>>> thank you.
>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>> [1].
>>>>>> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl%20%28Server%20and%20default-server%20options%29
>>>>>>
>>>>>> thank you.
>>>>>>
>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>> In configuring load balancing with SSL termination, I had to
>>>>>>>> customize the haproxy.conf file template of the Kubernetes service
>>>>>>>> loadbalancer repo to support SSL termination.
>>>>>>>>
>>>>>>>> In order to provide SSL termination, the Kubernetes services have
>>>>>>>> to be annotated with:
>>>>>>>>       serviceloadbalancer/lb.sslTerm: "true"
>>>>>>>>
>>>>>>>> The default approach to load balancing in the service loadbalancer
>>>>>>>> repo is a simple fan-out approach which uses the context path to
>>>>>>>> load balance the traffic. As we need to load balance based on the
>>>>>>>> host name, we need to go with the name-based virtual hosting
>>>>>>>> approach. It can be achieved via the following annotation:
>>>>>>>>      serviceloadbalancer/lb.Host: "<host-name>"
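Putting the two annotations together, an annotated service could look
roughly like this (name, host and ports are only placeholders):

apiVersion: v1
kind: Service
metadata:
  name: myapp                                          # placeholder name
  annotations:
    serviceloadbalancer/lb.Host: "myapp.example.com"   # name-based virtual hosting
    serviceloadbalancer/lb.sslTerm: "true"             # SSL terminated at HAProxy
spec:
  selector:
    app: myapp
  ports:
    - port: 80         # HAProxy forwards plain HTTP to the pods after termination
      targetPort: 8080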
>>>>>>>>
>>>>>>>> Any suggestions on the approach taken are highly appreciated.
>>>>>>>>
>>>>>>>> Thank you
>>>>>>>>
>>>>>>>> [1].
>>>>>>>> https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Imesh Gunaratne*
>>>>>>> Senior Technical Lead
>>>>>>> WSO2 Inc: http://wso2.com
>>>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>>>> W: http://imesh.io
>>>>>>> Lean . Enterprise . Middleware
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Manjula Rathnayaka
>>>>>> Associate Technical Lead
>>>>>> WSO2, Inc.
>>>>>> Mobile:+94 77 743 1987
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Imesh Gunaratne*
>>>>> Senior Technical Lead
>>>>> WSO2 Inc: http://wso2.com
>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>> W: http://imesh.io
>>>>> Lean . Enterprise . Middleware
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Manjula Rathnayaka
>>>> Associate Technical Lead
>>>> WSO2, Inc.
>>>> Mobile:+94 77 743 1987
>>>>
>>>
>>>
>>>
>>> --
>>> *Nishadi Kirielle*
>>> *Software Engineering Intern*
>>> Mobile : +94 (0) 714722148
>>> Blog : http://nishadikirielle.blogspot.com/
>>> nish...@wso2.com
>>>
>>
>>
>>
>> --
>> *Nishadi Kirielle*
>> *Software Engineering Intern*
>> Mobile : +94 (0) 714722148
>> Blog : http://nishadikirielle.blogspot.com/
>> nish...@wso2.com
>>
>> _______________________________________________
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io
> Lean . Enterprise . Middleware
>
>


-- 
*Nishadi Kirielle*
*Software Engineering Intern*
Mobile : +94 (0) 714722148
Blog : http://nishadikirielle.blogspot.com/
nish...@wso2.com
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
