On Mon, Jan 22, 2018 at 2:46 PM, Pubudu Gunatilaka <pubu...@wso2.com> wrote:

> Hi Imesh,
>
> It would be very convenient if we can reuse the Docker image. AFAIU, if we
> follow the above approach we can use a single Docker image on all the
> container platforms.
>
> One of the drawbacks I see with this approach is that the user has to
> update the volume mounts with the necessary JAR files, JKS files, etc. If
> any user tries this approach in Kubernetes, he has to add those JAR files
> and binary files to the NFS server (to the volume which holds the NFS
> server data). This affects the installation experience.
>
> IMHO, we should minimize the effort needed to try out the WSO2 products on
> Kubernetes or any container platform. Based on their needs, users can then
> switch to their own deployment approach.
>

Thanks for the quick response Pubudu! Yes, that's a valid concern. With the
proposed approach the user would need to execute an extra step to copy the
required files to a set of volume mounts before executing the deployment. In
a production deployment I think that would be acceptable, as there will be
other manual steps involved such as creating databases, setting up CI/CD,
and automating the deployment. However, in an evaluation scenario, when
someone is executing a demo, it might become an overhead.

I also noticed that the kubectl cp command can be used to copy files from a
local machine to a container. Let's check whether we can use that approach
to overcome this issue:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
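
For example, something like the following might work (the namespace, pod
name, and file paths here are only placeholders):

    # copy a JAR file from the local machine into a running container
    kubectl cp ./some-membership-scheme.jar \
        wso2/wso2ei-integrator-0:/home/wso2carbon/wso2ei-6.1.1/lib/

This would still be a manual step, but it would avoid having to pre-populate
the NFS volume before running the deployment.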

On Mon, Jan 22, 2018 at 3:03 PM, Isuru Haththotuwa <isu...@wso2.com> wrote:

> In the API Manager K8s artifacts, what we have followed is not an
> image-per-profile method. With the introduction of ConfigMaps, it has come
> down to only two base images - for APIM and Analytics. It's extremely
> helpful from the maintenance PoV that we have a single set of Dockerfiles,
> but AFAIU it has a tradeoff with the level of automation, since the user
> might have manual steps to perform.
>

Thanks Isuru for the quick response! What I meant by image per profile is
that, in products like EI and SP, we would need a Docker image per profile
due to their design.

>
> It would still be possible to write a wrapper script for a single set
> of Dockerfiles so that we can copy the artifacts, etc. using a single
> Docker image, but still that script would need to be maintained.
>

A good point! I think it would be better to have a one-to-one mapping
between the Docker images and the Dockerfiles to make it easier for users
to understand how Docker images are built.
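
Just to illustrate the idea, a minimal Dockerfile for a product profile
image could look roughly like the following (the base image, product
distribution, and paths here are only placeholders, not the actual build
artifacts):

    FROM openjdk:8-jdk

    # non-root user (uid: 200) and group (gid: 200) as per the proposal
    RUN groupadd --system -g 200 wso2 \
        && useradd --system -u 200 -g wso2 -m wso2carbon

    # only the product distribution is baked in; configurations and extra
    # JAR files would be provided at runtime through volume mounts
    COPY wso2ei-6.1.1 /home/wso2carbon/wso2ei-6.1.1
    RUN chown -R wso2carbon:wso2 /home/wso2carbon/wso2ei-6.1.1

    USER wso2carbon
    WORKDIR /home/wso2carbon
    ENTRYPOINT ["/home/wso2carbon/wso2ei-6.1.1/bin/integrator.sh"]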

>
> What if we go for a hybrid mode - not using a Dockerfile per product
> profile or a single set of Dockerfiles for all, but using a specific set of
> Dockerfiles per platform (Kubernetes, DC/OS, etc.)? Also, we need to be
> open to any other platform that we would need to support in the future.
>

Yes, I think that's what we have at the moment, and with that approach we
need to create a new set of Docker images for every other container
platform we need to support.

Thanks
Imesh

>
> On Mon, Jan 22, 2018 at 1:36 PM, Imesh Gunaratne <im...@wso2.com> wrote:
>
>> Hi All,
>>
>> Currently, we build Docker images for each platform (Docker, Kubernetes,
>> DC/OS, etc.) for each WSO2 product profile (EI: Integrator, MB, BPS; API-M:
>> Gateway, Key Manager, Pub/Store, etc.). AFAIU, the main reason to do this
>> was bundling platform-specific JAR files (membership scheme JAR file for
>> clustering) and platform-specific filesystem security permission management
>> (mainly for OpenShift).
>>
>> With the recent refinements we did to the Dockerfiles and Docker Compose
>> templates, we found that the same set of Docker images can be used on all
>> container platforms if we follow the approach below:
>>
>>    - Create the product profile Docker images by including the product
>>    distribution and the JDK.
>>    - Provide configurations using volume mounts (on Kubernetes use
>>    ConfigMaps).
>>    - Provide JAR files and other binary files using volume mounts.
>>    - Use a standard permission model for accessing volume mounts at
>>    runtime (see the sketch after this list):
>>       - Use a non-root user to start the container: wso2carbon (uid:
>>       200).
>>       - Use a non-root user group: wso2 (gid: 200), and add the wso2carbon
>>       user to the wso2 group.
>>       - Grant the wso2 user group the required filesystem access to the
>>       product home directory.
>>       - Use the wso2 user group (gid: 200) to provide access to the
>>       volume mounts at runtime:
>>          - On Kubernetes we can use Pod Security Policies to manage
>>          these permissions.
>>          - On OpenShift this can be managed using Security Context
>>          Constraints.
>>          - On DC/OS volumes can be directly granted to the user group
>>          (gid: 200).
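>>
>> As a rough sketch of how this could look on Kubernetes (the resource
>> names, image, and mount paths below are only placeholders, not the
>> actual artifacts), a Pod spec following this permission model would be
>> something like:
>>
>>    apiVersion: v1
>>    kind: Pod
>>    metadata:
>>      name: wso2ei-integrator
>>    spec:
>>      securityContext:
>>        runAsUser: 200   # wso2carbon user
>>        fsGroup: 200     # volume mounts become accessible to gid 200 (wso2)
>>      containers:
>>        - name: integrator
>>          image: wso2ei-integrator:6.1.1
>>          volumeMounts:
>>            - name: config
>>              mountPath: /home/wso2carbon/wso2-config-volume
>>            - name: extensions
>>              mountPath: /home/wso2carbon/wso2-extensions-volume
>>      volumes:
>>        - name: config
>>          configMap:
>>            name: integrator-conf              # configurations
>>        - name: extensions
>>          persistentVolumeClaim:
>>            claimName: integrator-extensions   # JAR files and other binaries
>>
>> A Pod Security Policy (or a Security Context Constraint on OpenShift)
>> would then mainly need to allow uid/gid 200 for runAsUser and fsGroup.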
>>
>> Really appreciate your thoughts on this proposal.
>>
>> Thanks
>> Imesh
>>
>> --
>> *Imesh Gunaratne*
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048 | http://wso2.com/
>
>
>


-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
