Ben, thanks for pointing me in the right direction. However, after a week, I am still struggling and need help.

The questions you raised are genuine issues which, if managed by OpenShift, will be easy to handle; if OpenShift does not manage them, then managing them manually is certainly a difficult task.

Leaving that aside, I have been struggling with running my app on OpenShift. Here is a list of everything I tried.

As you suggested, I tried to create a volume and run the Java Docker image with it. I am getting really lost in a variety of issues; here are some:

- Unless I log in as the service:admin user (no password), I am not authorized to mount a volume.
- I can only log in as service:admin on the command line; the UI gives me an error. So basically I cannot visually see mounted volumes.
- There is no way from the UI to create a volume claim; I must define it in JSON.
- I was unable to find any documentation for this JSON and had to copy from other places (see the sketch after this list).
- After all this, how do I know which volume is being attached to which volume claim?
- I copied mongodb.json and switched the image to a Java one; this did not work.
- I decided this was too complex; let's just do S2I. However, I cannot find any documentation on how to do it. The example images work, but when I try my own Node or JEE project, S2I fails. I am guessing it needs some specific files in the source to do this.
- While the PHP project https://github.com/gshipley/simplephp works with S2I with only a PHP file, a Node.js file does not work. I could not find documentation on how to get my Node file to run.
- I tried to do walkthroughs, but most of them use OpenShift Online and a command, "rhc", that is not available to me.
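
For reference, here is the kind of minimal claim JSON I ended up piecing together (a sketch; the claim name and size are placeholders I made up):

oc create -f - <<'EOF'
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": { "name": "myapp-data" },
  "spec": {
    "accessModes": ["ReadWriteOnce"],
    "resources": { "requests": { "storage": "1Gi" } }
  }
}
EOF

As far as I can tell, "oc get pvc" then shows which persistent volume the claim was bound to, which was my other question above.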

And all I wanted to do was run one simple command:

docker run --rm -it -v /my/host/folder:/usr/src/myapp \
  -w /usr/src/myapp openjdk:8-jre-alpine java myClass

ARGGG!! HELP please.



On 8/26/2016 3:24 PM, Ben Parees wrote:


On Fri, Aug 26, 2016 at 6:10 PM, Ravi Kapoor <ravikapoor...@gmail.com> wrote:


    Ben,

    Thank you so much for taking the time to explain. This is very helpful.
    If I may, I have a few follow-up questions:

    > That is not a great approach to running code. It's fine for
    > development, but you really want to be producing immutable images
    > that a developer can hand to QE; once QE has tested it, they can
    > hand that exact same image to prod, and there's no risk that
    > pieces have changed.

    Q1: It seems like Lyft uses the approach I was mentioning, i.e.
    injecting code into containers rather than copying code into images
    (ref: https://youtu.be/iC2T3gJsB0g?t=595). In this approach there
    are only two elements: the image (which will not change) and the
    code build/tag (which will also not change). So what else can change?



Since you're mounting the code from the local filesystem into the
running container, how do you know the code is the same on every machine
that you're running the container on?

If you have 15 nodes in your cluster, what happens when only 14 of them
get the latest code update and the 15th one is still mounting an old file?

Or your admin accidentally copies a dev version of the code to one of
the nodes?

When you look at a running container how do you know what version of the
application it's running, short of inspecting the mounted content?

When you bring a new node online in your cluster, how do you get all the
right code onto that node so all your images (thousands possibly!) are
able to mount what they need when they start up?

Do you put all the code for all your applications on all your nodes so
that you can run any application on any node?  Do you build your own
infrastructure to copy the right code to the right place before starting
an application?  Do you rely on a shared filesystem mounted to all your
nodes to make the code accessible?

These are questions you don't have to answer when the image *is* the
application.



    > running things in that way means you need to get both the image
    > and your class files into paths on any machine where the image is
    > going to be run, and then specify that mount path correctly

    Q2: I would think that OpenShift has a mechanism to pull files from
    git to a temp folder and a way to volume mount that temp folder into
    any container it runs. Volume mounts are a very basic feature of
    Docker and I am hoping they are somehow workable with OpenShift.
    Are they not? Don't we need them for, let's say, database
    containers? Let's say a mongodb container is running and writing
    data to a volume-mounted disk. If the container crashes, is
    OpenShift able to start a new container with the previously saved data?



OpenShift does support git-based volumes if you want to go with that approach:

https://docs.openshift.org/latest/dev_guide/volumes.html#adding-volumes

I'm not sure whether you can provide git credentials to that volume
definition to handle private git repositories, however.
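
A rough sketch of what a pod using such a git volume could look like
(the repo URL and names are made-up placeholders):

oc create -f - <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "git-volume-demo" },
  "spec": {
    "volumes": [
      { "name": "source",
        "gitRepo": {
          "repository": "https://github.com/example/my-app.git",
          "revision": "master",
          "directory": "." } }
    ],
    "containers": [
      { "name": "app",
        "image": "openjdk:8-jre-alpine",
        "command": ["sleep", "3600"],
        "volumeMounts": [
          { "name": "source", "mountPath": "/usr/src/myapp" }
        ] }
    ]
  }
}
EOF

The repo is cloned into the volume before the container starts, so the
source shows up under the mount path.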





    Q3: Even if you disagree, I would still like to know (if nothing
    else then for learning/education) how to run external images
    with volume mounts and other parameters passed into the image.
    I am having a very hard time finding this.


https://docs.openshift.org/latest/dev_guide/volumes.html
https://docs.openshift.org/latest/architecture/additional_concepts/storage.html




    regards
    Ravi


    On Fri, Aug 26, 2016 at 10:29 AM, Ben Parees <bpar...@redhat.com> wrote:



        On Fri, Aug 26, 2016 at 1:07 PM, Ravi <ravikapoor...@gmail.com> wrote:


            So I am trying to use OpenShift to manage our Docker containers.

            The first problem I am facing is that most of the
            documentation and image templates seem to be about S2I. We are


        When it comes to building images, OpenShift supports basically
        four approaches, in descending order of recommendation and
        increasing order of flexibility:

        1) s2i (you supply source and pick a builder image, we build a
        new application image and push it somewhere)
        2) docker-type builds (you supply the dockerfile and content, we
        run docker build for you and push the image somewhere)
        3) custom (you supply an image, we'll run that image, and it can
        do whatever it wants to "build" something and push it somewhere,
        whether that something is an image, a jar file, etc.)
        4) build your images externally on your own infrastructure and
        just use openshift to run them.

        The first three of those are discussed here:
        https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#builds
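
        For a concrete feel of option 1, a rough sketch (the builder
        image is the wildfly example stream; the repo URL and app name
        are made-up placeholders):

oc new-app wildfly~https://github.com/example/my-jee-app.git --name=myapp
oc logs -f bc/myapp   # follow the s2i build as it runs

        oc clones the source, runs the builder's assemble script to
        produce a new application image, and pushes it to the internal
        registry.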


            considering continuous builds for multiple projects, and
            building an image every hour for multiple projects would
            create a total of 20GB of images every day.


        I'm not sure how this statement relates to s2i. Do you have a
        specific concern about s2i with respect to creating these
        images? OpenShift does offer image pruning to help deal with
        the number of images it sounds like you'll be creating, if
        you're interested in that.
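
        For example (a sketch; pruning requires cluster-admin rights
        and the exact flags may vary by version):

oadm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm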




            Q1: Is this the right way of thinking? Since most companies
            today are doing CI, this should be a common problem. Why is
            S2I considered an impressive feature?


        S2I really has little to do with CI/CD. S2I is one way to
        produce docker images; there are others, as I listed above. Your
        CI flow is going to be something like:

        1) change source
        2) build that source into an image (in whatever way you want,
        s2i is one mechanism)
        3) test the new image
        4) push the new image into production
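
        In oc terms, that flow could look roughly like this (a sketch,
        assuming a build config and image stream both named "myapp"):

oc start-build myapp --follow    # 2) build the changed source into an image
oc tag myapp:latest myapp:qa     # 3) hand that exact image to testing
oc tag myapp:qa myapp:prod       # 4) promote the tested image to production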

        The advantages of using s2i are not about how it specifically
        works well with CI, but rather about what it offers around
        building images in a quick, secure, convenient way, as
        described here:

        
        https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#source-build





            So, I am trying to use off-the-shelf images and inject
            code/conf into them. I know how to do this from the docker
            command line (example: docker run --rm -it -v
            /my/host/folder:/usr/src/myapp -w /usr/src/myapp
            openjdk:8-jre-alpine java myClass)


        That is not a great approach to running code. It's fine for
        development, but you really want to be producing immutable
        images that a developer can hand to QE; once QE has tested it,
        they can hand that exact same image to prod, and there's no
        risk that pieces have changed.

        Also, running things in that way means you need to get both the
        image and your class files into paths on any machine where the
        image is going to be run, and then specify that mount path
        correctly. It's not a scalable model. You want to build
        runnable images, not images that need the application
        side-loaded via a mount.




            Q2: How do I configure the exact same command from
            OpenShift? I will need to do the following steps


        You shouldn't. Strictly speaking you can, via pod mount
        definitions and hostPath volume definitions, but it's not the
        right way to think about creating and running images in a
        clustered environment.
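
        For completeness, a minimal pod sketch mirroring that docker
        run command (names are placeholders; hostPath mounts also
        typically need extra SCC privileges, so treat this purely as
        illustration):

oc create -f - <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "hostpath-demo" },
  "spec": {
    "containers": [
      { "name": "app",
        "image": "openjdk:8-jre-alpine",
        "command": ["java", "myClass"],
        "workingDir": "/usr/src/myapp",
        "volumeMounts": [
          { "name": "app-src", "mountPath": "/usr/src/myapp" }
        ] }
    ],
    "volumes": [
      { "name": "app-src", "hostPath": { "path": "/my/host/folder" } }
    ],
    "restartPolicy": "Never"
  }
}
EOF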



            1. Jenkins is pushing compiled jar files to a git
            repository. The first step will be to pull the files down.
            2. I may have to unzip some files (in case it is a bunch of
            configurations, etc.).
            3. OpenShift should use docker run to create containers.


        Assuming you want to continue building jars via Jenkins and
        pushing them somewhere (doesn't have to be git), I'd suggest
        the following flow:

        1) Jenkins builds the jar and pushes it somewhere
        2) an s2i (or docker) build in OpenShift pulls from that
        somewhere: either it pulls the git source that includes the
        jar, or you write your own assemble script (or a dockerfile)
        which pulls the jar from a nexus repo or some other location.
        This is discussed here:
        https://docs.openshift.org/latest/dev_guide/builds.html#using-external-artifacts
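
        Such an assemble script could look roughly like this (a sketch;
        it lives at .s2i/bin/assemble in your source repo, and the
        nexus URL is a made-up placeholder):

#!/bin/bash
# .s2i/bin/assemble (sketch): fetch a prebuilt jar instead of compiling.
# Where the jar must land depends on the builder image's run script.
set -e
curl -fsSL -o "$HOME/app.jar" \
  "https://nexus.example.com/repository/releases/com/example/app/1.0/app-1.0.jar"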

        You can also do binary builds which don't require you to put
        the content in a git repo:
        https://docs.openshift.org/latest/dev_guide/builds.html#binary-source

        in which case the Jenkins job would build the jar locally and
        then invoke "oc start-build yourbuild --from-file your.jar".


        3) the image will get pushed to a docker registry as part of
        the build
        4) the image gets deployed on OpenShift; it is fully
        self-contained and does not need any external mounts. It can
        scale up and move between host nodes without any administrative
        maintenance.

        Hope that helps.





            thanks so much for help
            Ravi





        --
        Ben Parees | OpenShift





--
Ben Parees | OpenShift


_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
