Hi Devs,

I have now completed the initial implementation of the KubernetesIaas and
pushed the changes to the master branch.

Functionality:
- A member is mapped to a Kubernetes Pod.

- A replication controller is created for each member. This is to avoid the
following complications that may arise when a replication controller is
mapped to a cluster:
  1. Member-specific payload parameters (member id, cluster instance id,
etc.) cannot be passed into the container. As a result, instance status
events and health statistics will not work properly.
  2. If a container is healed by the replication controller on a failure,
Stratos will not know about this event. As a result, Stratos will not be
aware of the new pod id that is generated.
  3. Since container failure handling is done by both Kubernetes and the
Autoscaler, we might need proper logic to synchronize these actions.

- At a later stage we will do more research on the above concerns and try to
map a replication controller to a cluster to take advantage of Kubernetes
features such as high availability and faulty member handling.

- Dynamic member specific payload is passed to the container via
environment variables.

- The Iaas interface methods startInstance(), terminateInstance(),
setDynamicPayload() and getPartitionValidator() are implemented.
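
To make the above mapping concrete, here is a minimal sketch of the idea,
written in Python purely for illustration (the actual implementation is in
Java, and all class and method names below are assumptions, not the real
Stratos API). It shows one single-replica replication controller per member,
with the dynamic payload injected as container environment variables:

```python
class FakeKubernetesClient:
    """Stand-in for a Kubernetes API client, for illustration only."""

    def __init__(self):
        self.controllers = {}

    def create_replication_controller(self, definition):
        self.controllers[definition["metadata"]["name"]] = definition

    def delete_replication_controller(self, name):
        self.controllers.pop(name, None)


class KubernetesIaasSketch:
    """Sketch: one replication controller (replicas=1) per member."""

    def __init__(self, client):
        self.client = client
        self.payload = {}

    def set_dynamic_payload(self, payload):
        # Member-specific payload (member id, cluster instance id, etc.),
        # later exposed to the container as environment variables.
        self.payload = dict(payload)

    def start_instance(self, member_id, image):
        env = [{"name": k, "value": v} for k, v in self.payload.items()]
        definition = {
            "metadata": {"name": member_id},
            "spec": {
                "replicas": 1,  # Stratos, not Kubernetes, decides scaling
                "selector": {"member-id": member_id},
                "template": {
                    "metadata": {"labels": {"member-id": member_id}},
                    "spec": {
                        "containers": [
                            {"name": member_id, "image": image, "env": env}
                        ]
                    },
                },
            },
        }
        self.client.create_replication_controller(definition)
        return member_id

    def terminate_instance(self, member_id):
        self.client.delete_replication_controller(member_id)
```

Mapping a replication controller to a whole cluster would instead set
replicas to the member count, which is exactly the case where the three
concerns listed above apply.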

Thanks


On Sun, Dec 21, 2014 at 4:09 AM, Imesh Gunaratne <im...@apache.org> wrote:

> I have now committed the initial modifications to support this
> functionality:
>
> This includes the following changes:
> - Introduced KubernetesIaas class
> - Removed Cartridge.deployerType
> - Fixed Member.instanceId property conflict by introducing
> Member.clusterInstanceId
> - Now Member has two properties: instanceId -> id generated by the Iaas,
> clusterInstanceId -> cluster instance id of the application hierarchy
> - Updated all instance status and member events with the above change
> - Updated python cartridge agent with the above event change
> - Updated health statistics events, stream definitions, event publishers
> with instance_id -> cluster_instance_id attribute change
> - Updated the CloudControllerService.createInstance() method by introducing a
> new parameter class, "InstanceContext". Earlier, this method accepted
> MemberContext as the incoming parameter, which made it difficult to
> identify which parameter values are required to create an instance.
>
> I'm currently working on the functionality in the KubernetesIaas class.
>
> Thanks
>
> On Sat, Dec 20, 2014 at 8:33 AM, Imesh Gunaratne <im...@apache.org> wrote:
>
>> Thanks for the feedback Lakmal!
>>
>> On Fri, Dec 19, 2014 at 7:54 PM, Lakmal Warusawithana <lak...@wso2.com>
>> wrote:
>>
>>> +1 Imesh, this is what I had in mind also.
>>>
>>> On Fri, Dec 19, 2014 at 7:35 PM, Imesh Gunaratne <im...@apache.org>
>>> wrote:
>>>>
>>>> Hi Devs,
>>>>
>>>> As we have now removed the Kubernetes-specific cluster monitoring logic
>>>> from the Autoscaler, we can use the standard Cloud Controller service
>>>> methods for managing VM instances and containers. This will make sure that
>>>> the autoscaling logic works the same way for any type of cartridge.
>>>>
>>>> The idea is to move Kubernetes specific logic to a new class called
>>>> KubernetesIaas and implement the Iaas interface. Consequently, almost all
>>>> the features in the PaaS will work in the same manner for MockIaas, jclouds
>>>> Iaases and Kubernetes. This will give us the advantage of verifying
>>>> functionality with the Mock Iaas and running against other Iaases without
>>>> much of a problem.
>>>>
>>>> This class can be defined in the Iaas Providers section in the
>>>> cartridge definition. The complete work flow of an application that uses
>>>> Kubernetes would be as follows:
>>>>
>>>> *Work Flow of an Application using Kubernetes:*
>>>>
>>>> 1. Register Kubernetes clusters.
>>>> 2. Define the Kubernetes Iaas Provider in the cartridge; this will indicate
>>>> to Stratos that the given cartridge needs Kubernetes support.
>>>>     Cartridge -> Iaas Providers -> Kubernetes Iaas Provider
>>>> 3. Define Kubernetes clusters in the Network Partitions. All the
>>>> partitions in the above network partitions will use the same configuration.
>>>>     Deployment Policy -> Network Partitions -> Kubernetes Cluster
>>>> 4. Define an application with the above cartridge.
>>>>     Application -> Cartridges
>>>>
>>>> According to the above configuration, when the Autoscaler asks the Cloud
>>>> Controller to start an instance, it will find the Iaas Provider in the
>>>> relevant partition; if it is Kubernetes, it will find the Kubernetes
>>>> Cluster defined against the partition and use that information to start
>>>> the containers. The instance termination process would work the same way.
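>>>>
>>>> In configuration terms, steps 2 and 3 above might look roughly like the
>>>> following fragment (hypothetical; the field names are illustrative, not
>>>> the exact Stratos schema):
>>>>
>>>> ```json
>>>> {
>>>>   "cartridge": {
>>>>     "type": "tomcat",
>>>>     "iaasProviders": [ { "type": "kubernetes" } ]
>>>>   },
>>>>   "deploymentPolicy": {
>>>>     "networkPartitions": [
>>>>       { "id": "network-partition-1",
>>>>         "kubernetesClusterId": "kubernetes-cluster-1" }
>>>>     ]
>>>>   }
>>>> }
>>>> ```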
>>>>
>>>> I have now started implementing this logic, please add your thoughts.
>>>>
>>>> Thanks
>>>>
>>>> --
>>>> Imesh Gunaratne
>>>>
>>>> Technical Lead, WSO2
>>>> Committer & PMC Member, Apache Stratos
>>>>
>>>
>>>
>>> --
>>> Lakmal Warusawithana
>>> Vice President, Apache Stratos
>>> Director - Cloud Architecture; WSO2 Inc.
>>> Mobile : +94714289692
>>> Blog : http://lakmalsview.blogspot.com/
>>>
>>>
>>
>>
>> --
>> Imesh Gunaratne
>>
>> Technical Lead, WSO2
>> Committer & PMC Member, Apache Stratos
>>
>
>
>
> --
> Imesh Gunaratne
>
> Technical Lead, WSO2
> Committer & PMC Member, Apache Stratos
>



-- 
Imesh Gunaratne

Technical Lead, WSO2
Committer & PMC Member, Apache Stratos
