RE: Image Registry on bare metal install?
Hi Ben,

I found the issue! I had already provisioned NFS storage, but had missed a bit of config: setting the empty storage/pvc/claim section on configs.imageregistry.operator.openshift.io:

    storage:
      pvc:
        claim:

It's a little odd: just adding this alone, the config wouldn't save (rather, the change was not recognised). I had to revert "managementState: Managed" to "Removed", and then back to "Managed" (together with storage/pvc/claim) before it saved.

Thanks!
Ravi

From: Ben Parees
Sent: 06 September 2020 21:27
To: Ravi Patel
Cc: users@lists.openshift.redhat.com
Subject: Re: Image Registry on bare metal install?

On Sun, Sep 6, 2020 at 11:28 AM Ravi Patel <ravi.pa...@sabrex.com> wrote:
> Hi! I've installed OCP 4.5 (updated to 4.7) as a bare-metal install on a single ESX server. All seems well. I'm now trying to deploy a package, which requires a route to the Image Registry. If I understand the documentation correctly, the Image Registry is not installed by default on bare-metal installs. Are there any guides on what I need to do to install & expose the registry?

exposing the registry via a route:
https://docs.openshift.com/container-platform/4.5/registry/securing-exposing-registry.html

storage configuration:
https://docs.openshift.com/container-platform/4.5/registry/configuring_registry_storage/configuring-registry-storage-baremetal.html

That doc covers configuring the registry to use a PVC. What volume backs that PVC will be up to you and your cluster, but it needs to be something that can be mounted read/write/many so that multiple nodes can mount it, which usually does mean NFS. But you can also use S3 or GCS as your registry storage.
Configuration guides can be found here:
https://docs.openshift.com/container-platform/4.5/registry/configuring_registry_storage/configuring-registry-storage-aws-user-infrastructure.html
https://docs.openshift.com/container-platform/4.5/registry/configuring_registry_storage/configuring-registry-storage-gcp-user-infrastructure.html
https://docs.openshift.com/container-platform/4.5/registry/configuring_registry_storage/configuring-registry-storage-azure-user-infrastructure.html

> Do I need to provide object-storage somehow (connect to S3?) or can it use my existing NFS?

Either, as long as you've made your NFS storage mountable as a PV in your cluster.

> Thanks, Ravi K Patel

SabreX Ltd. - Registered in England and Wales with number 12181080. Registered office: Ossington Chambers | 6-8 Castle Gate | Newark | NG24 1AX | UK. The information contained in this communication from the sender is confidential. It is intended solely for use by the recipient and others authorized to receive it. If you are not the recipient or you have received this message in error, you are hereby notified that any disclosure, copying, distribution or taking action in relation to the contents of this information is strictly prohibited and may be unlawful. Further, please send it back to us immediately and delete it.

___ users mailing list users@lists.openshift.redhat.com http://lists.openshift.redhat.com/openshiftmm/listinfo/users

--
Ben Parees | OpenShift
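For anyone hitting the same thing: the sequence Ravi describes (detach the operator, then re-enable it together with the PVC storage stanza) can also be applied non-interactively with oc patch. This is a sketch; the empty claim name tells the operator to create and bind its own PVC, and you may instead set it to an existing claim's name:

```shell
# In-place edits to the registry storage config were not picked up,
# so first detach the operator...
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"managementState":"Removed"}}'

# ...then re-enable it together with the PVC storage config in one patch.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  --patch '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":""}}}}'
```

Check `oc get co image-registry` afterwards to confirm the operator reconciles cleanly.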
Image Registry on bare metal install?
Hi! I've installed OCP 4.5 (updated to 4.7) as a bare-metal install on a single ESX server. All seems well. I'm now trying to deploy a package, which requires a route to the Image Registry. If I understand the documentation correctly, the Image Registry is not installed by default on bare-metal installs. Are there any guides on what I need to do to install & expose the registry? Do I need to provide object-storage somehow (connect to S3?) or can it use my existing NFS? Thanks, Ravi K Patel SabreX Ltd. - Registered in England and Wales with number 12181080. Registered office: Ossington Chambers | 6-8 Castle Gate | Newark | NG24 1AX | UK The information contained in this communication from the sender is confidential. It is intended solely for use by the recipient and others authorized to receive it. If you are not the recipient or you have received this message in error, you are hereby notified that any disclosure, copying, distribution or taking action in relation of the contents of this information is strictly prohibited and may be unlawful. Further, please send it back to us immediately and delete it. ___ users mailing list users@lists.openshift.redhat.com http://lists.openshift.redhat.com/openshiftmm/listinfo/users
Re: error during install: subnet id does not exist
It only asks for the VPC subnet id. I have set that, and that is what is giving trouble. There is no place to put the VPC id itself.

On 11/17/2016 1:06 PM, Alex Wauck wrote:
> On Thu, Nov 17, 2016 at 2:52 PM, Ravi Kapoor <ravikapoor...@gmail.com> wrote:
>> The instructions do not ask for availability zone or VPC. They only ask for a subnet and I have specified that. Maybe it is picking some other VPC where the subnet is not available.
>
> Have you read this? https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md
>
> According to that document, you are supposed to specify a whole bunch of stuff in environment variables, including the VPC. I've never tried it myself, so I'm not sure how well it works.
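To make "a whole bunch of stuff in environment variables" concrete, the shape of that setup is roughly the following sketch. The AWS credential variable names are the standard SDK ones; the cluster-shape variable names below are purely illustrative placeholders — the authoritative names and full list are in openshift-ansible's README_AWS.md:

```shell
# Standard AWS credential variables (real names):
export AWS_ACCESS_KEY_ID='AKIA...'
export AWS_SECRET_ACCESS_KEY='...'

# Region/VPC/subnet selection -- variable names here are ILLUSTRATIVE ONLY;
# check README_AWS.md for the exact ones your playbook version expects.
export ec2_region='us-east-1'
export ec2_vpc_subnet='subnet-c7372dfd'
```

The point of the thread stands either way: if the playbook only receives a subnet id, the subnet must live in the VPC the playbook ends up targeting, or AWS returns InvalidSubnetID.NotFound.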
Re: error during install: subnet id does not exist
> Are you using openshift-ansible's AWS support to create EC2 instances for you? We create our instances by other means and then run openshift-ansible on them using the BYO playbooks.

I am not opposed to that, just that I am a beginner trying to get something up and running. I can create instances manually and run ansible on them, but I am not able to find instructions. Openshift's "advanced install" instructions are way too advanced. I have a single-node openshift working, but to add a node, the instructions only point to ansible (oadm does not have a command for it). So I am thinking the fastest way to a working cluster (with add-node possibilities) is to use ansible, hence this path.

> Do you have the availability zone or VPC set in your inventory file? If so, does it match the subnet you specified?

The instructions do not ask for availability zone or VPC. They only ask for a subnet and I have specified that. Maybe it is picking some other VPC where the subnet is not available.

On Thu, Nov 17, 2016 at 11:19 AM, Alex Wauck <alexwa...@exosite.com> wrote:
> On Thu, Nov 17, 2016 at 12:09 PM, Ravi Kapoor <ravikapoor...@gmail.com> wrote:
>> Question 1: Is this the best way to install? So far I have been using "oc cluster up"; while it works, it crashes once in a while (at least the UI crashes, so I am forced to restart it, which kills all pods).
>
> We used openshift-ansible to install our OpenShift cluster, and we fairly regularly use it to create temporary clusters for testing purposes. I would consider it the best way to install.
>
>> Question 2: After I did all the configuration, my install still fails with the following error:
>>
>> exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidSubnetID.NotFound: The subnet ID 'subnet-c7372dfd' does not exist (request id 2b4d4256-7204-4ced-9af3-318d86a759f0)
>
> Are you using openshift-ansible's AWS support to create EC2 instances for you?
> We create our instances by other means and then run openshift-ansible on them using the BYO playbooks, so I'm not familiar with openshift-ansible's AWS support. Do you have the availability zone or VPC set in your inventory file? If so, does it match the subnet you specified?
>
> --
> Alex Wauck // DevOps Engineer
> E X O S I T E
> www.exosite.com
> Making Machines More Human.
error during install: subnet id does not exist
I am trying to install openshift using the instructions at https://github.com/openshift/openshift-ansible

Question 1: Is this the best way to install? So far I have been using "oc cluster up"; while it works, it crashes once in a while (at least the UI crashes, so I am forced to restart it, which kills all pods).

Question 2: After I did all the configuration, my install still fails with the following error:

exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidSubnetID.NotFound: The subnet ID 'subnet-c7372dfd' does not exist (request id 2b4d4256-7204-4ced-9af3-318d86a759f0)

The subnet id is correct, here is a screenshot. [image: Inline image 1]

Thanks for any help.
Unable to start openshift in VM, AWS or google cloud
I am not able to start openshift; I tried three different ways.

1. Windows 7 + VirtualBox + Ubuntu. oc cluster up works well. I went to the console and launched the nodejs-ex example. The console shows it is up; however, when I click on the route, it says "unable to connect". I tried going directly to the pod's IP address and that does work. In other words, somehow the load balancer was failing in the VirtualBox Ubuntu VM.

2. Then I moved on to AWS. I launched a RedHat image, installed docker and started openshift. Here, oc starts on the private IP address, so I am not able to access it from the public internet. I even tried oc cluster up --public-hostname='my ip address', but since the public IP address is NAT'd and never visible on the instance itself, oc is not able to detect etcd etc. and fails.

3. Then I tried on google cloud. I faced exactly the same issue as on AWS.

If any one of them works, I will be ok, but I have no idea how to get past these issues.
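For the AWS/GCE cases, oc cluster up has flags meant for exactly this split between the instance's private address and its NAT'd public one. A sketch — the hostname and IP are placeholders, and the xip.io wildcard-DNS suffix is just one way to get routes resolving publicly:

```shell
# Serve the console/API under the public name while the process still
# binds the instance's private interface; give routes a wildcard suffix
# that resolves to the public IP (xip.io-style DNS shown for illustration).
oc cluster up \
  --public-hostname=ec2-54-12-34-56.compute-1.amazonaws.com \
  --routing-suffix=54.12.34.56.xip.io
```

The cloud firewall (security group) must also allow inbound 8443 for the console and 80/443 for routes.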
how can I force rolling deployment without any change in deployment config
I created a deployment config and it is working fine. However, I made a few changes in a file mounted using a persistent volume. Somehow the changed file is not being loaded and the old file is being used. I found that to load the modified file, I need to do a new deployment. I also found that unless I change something in my deploymentconfig.yaml, the rolling deployment is not triggered.

So my temporary solution was to change the file name on disk, reflect the same in the deployment config, and trigger it using "oc replace -f deploymentConfig.yaml". While this solves the problem, renaming the file on every deployment is way too risky.

I tried inserting a pseudo element in the YAML file such as "buildnumber: 1"; however, changing this parameter does not trigger the deployment either.

Is there a way for me to force a reload of the file from the persistent volume, or to force a deployment without renaming the file?
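The missing piece here is that a deployment can be kicked off directly, without editing the config at all. A sketch, with the dc name assumed to be "myapp" — which spelling works depends on the client version:

```shell
# Newer oc clients: start a new rollout of the latest deployment config.
oc rollout latest dc/myapp

# Older (origin 1.x) clients used:
oc deploy myapp --latest
```

The new pods will remount the persistent volume and pick up the changed file, no renaming needed.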
Re: Pod does not have Scale up/down buttons
Once more, now with JSON:

{
  "kind": "List",
  "apiVersion": "v1beta3",
  "metadata": {},
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "labels": { "name": "node-test" },
        "name": "node-test"
      },
      "spec": {
        "containers": [
          {
            "image": "node:4.4.7",
            "imagePullPolicy": "IfNotPresent",
            "name": "node-test",
            "command": ["node"],
            "args": ["/usr/src/app/server.js"],
            "ports": [{ "containerPort": 8080, "protocol": "TCP" }],
            "volumeMounts": [{ "mountPath": "/usr/src/app", "name": "myclaim2" }],
            "securityContext": { "capabilities": {}, "privileged": false },
            "terminationMessagePath": "/dev/termination-log"
          }
        ],
        "volumes": [
          { "name": "myclaim2", "persistentVolumeClaim": { "claimName": "myclaim2" } }
        ],
        "dnsPolicy": "ClusterFirst",
        "restartPolicy": "Always",
        "serviceAccount": ""
      },
      "status": {}
    },
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": { "creationTimestamp": null, "name": "node-service" },
      "spec": {
        "portalIP": "",
        "ports": [{ "name": "web", "port": 8080, "protocol": "TCP" }],
        "selector": { "name": "node-test" },
        "sessionAffinity": "None",
        "type": "ClusterIP"
      },
      "status": { "loadBalancer": {} }
    },
    {
      "apiVersion": "v1",
      "kind": "Route",
      "metadata": { "annotations": {}, "name": "node-route" },
      "spec": { "to": { "name": "node-service" } }
    }
  ]
}

On Mon, Sep 19, 2016 at 2:19 PM, Ravi Kapoor <ravikapoor...@gmail.com> wrote:
> I created the following definition. It successfully creates a service, pod and a route. I am able to access the website.
>
> It shows 1 Pod running; however, there are no scale up/down buttons in the UI.
> How can I scale this application up?
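For the record, the buttons are missing because this List creates a bare Pod: there is no replication controller or deployment config owning a replica count, so there is nothing to scale. Once the pod is managed by a controller, scaling works from the CLI too. A sketch, with names assumed from the JSON above:

```shell
# Bare pods cannot be scaled; a controller (rc or dc) manages the
# replica count. With one in place:
oc scale rc/node-test --replicas=3
# or, for a deployment config:
oc scale dc/node-test --replicas=3
```

The simplest fix for the JSON above is to wrap the pod template in a ReplicationController or DeploymentConfig instead of a standalone Pod.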
Re: few basic questions about S2I and docker run
Hi Ben, I am finally able to run my nodejs code on openshift with both approaches (volume mount as well as S2I). I was also able to resolve most of the other issues I mentioned and was able to run a JEE application as well. Thanks a lot for helping me through all the silly questions. Good news is that now my company will be using openshift to manage our dockers/deployments.

regards

On Sat, Sep 10, 2016 at 8:23 AM, Ben Parees <bpar...@redhat.com> wrote:
> you can define a command on the container within the pod:
> http://kubernetes.io/docs/user-guide/configuring-containers/#launching-a-container-using-a-configuration-file
>
> On Fri, Sep 9, 2016 at 5:21 PM, Ravi <ravikapoor...@gmail.com> wrote:
>> Thank you for this help.
>>
>> I was trying nginx because after invoking the container, I do not have to run a command. For java or node, after the container is run I will need to run a command, e.g.
>>
>> java -jar myapp.jar
>> OR
>> node server.js
>>
>> Can you guide me how to add this to the json file or point me to documentation so I can try this?
>>
>> thanks so much
>>
>> On 9/8/2016 6:56 PM, Ben Parees wrote:
>>> Downloads$ oc get pods
>>> NAME             READY   STATUS    RESTARTS   AGE
>>> nginx-1-deploy   1/1     Running   0          14s
>>> nginx-1-rmfl9    0/1     Error     0          11s
>>>
>>> Downloads$ oc logs nginx-1-rmfl9
>>> 2016/09/09 01:54:21 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
>>> nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
>>> 2016/09/09 01:54:21 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
>>> nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
>>>
>>> the nginx image probably only works when run as root or as some other specific user.
>>> when images are run in openshift, by default they are assigned a random uid for security purposes. that can cause issues with images that expect to run as a specific user. please see our documentation:
>>>
>>> https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines
>>> (section on support for arbitrary uids)
>>>
>>> to relax the restriction, see:
>>> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
>>>
>>> On Thu, Sep 8, 2016 at 9:50 PM, Ravi <ravikapoor...@gmail.com> wrote:
>>>> oh, forgot to add, I do not have any readiness probe.
>>>>
>>>> On 9/8/2016 6:47 PM, Ravi Kapoor wrote:
>>>>> I removed the volumes, the pod still failed. json and logs attached.
>>>>>
>>>>> On Thu, Sep 8, 2016 at 6:35 PM, Ben Parees <bpar...@redhat.com> wrote:
>>>>>> though i don't see it in your json, it sounds like you have a readiness probe defined on your pod and it's not being met successfully.
>>>>>>
>>>>>> the other possibility is it has to do w/ your mounts. can you temporarily remove the volume mounts and see if the pod comes up?
>>>>>>
>>>>>> On Thu, Sep 8, 2016 at 8:33 PM, Ravi Kapoor <ravikapoor...@gmail.com> wrote:
>>>>>>> Pod deployment failed. The error in the console log is:
>>>>>>>
>>>>>>> --> Scaling nginx-1 to 1
>>>>>>> --> Waiting up to 10m0s for pods in deployment nginx-1 to become ready
>>>>>>> error: update acceptor rejected nginx-1: pods for deployment "nginx-1" took longer t
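The SCC relaxation in that last link boils down to one admin command. A sketch — this grants anyuid to the project's default service account, which is a broad permission, so use it deliberately:

```shell
# Allow pods run by the 'default' service account in the current project
# to keep the UID baked into the image (e.g. nginx expecting to run as root).
oc adm policy add-scc-to-user anyuid -z default
```

After granting it, redeploy; the nginx container will start with its Dockerfile's user instead of a random uid.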
Re: few basic questions about S2I and docker run
Ben, you have been very helpful. I am sincerely thankful.

> I still think you'll get more mileage by trying to use the system as it was designed to be used (build an image with your compiled source built in) instead of trying to force a different workflow onto it.

I understand and agree. Accordingly, I am working on a 2-step solution:

1. The first step is to get my dockers up and running in a day or two. Considering how long it is taking me to understand the system, I want to do it the short way first, i.e. be able to run the following command from within openshift:

"docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp openjdk:8-jre-alpine java myClass"

This means:
- download jars/files from source control to a host folder foobar
- mount the host folder foobar, which has my class files/jars, into a java docker at /usr/src/myapp
- run the java docker (along with the -w flag)

2. Once I can get the system up, I will continue working out how to make S2I operational and switch to it once I have enough confidence.

I am struggling with the fact that running php through S2I seems to be straightforward: no special config etc. is needed. However, to run Java or Node code, the repo needs particular images or a package.json etc., and so far I have not been able to understand what I need to add to the repo to make it S2I compatible. In other words, if I create one php file, put it in a repo, and mention the git url in S2I, it works. If I create a single node file or java file, or even use the jboss example git (https://github.com/jboss-developer/jboss-eap-quickstarts) in S2I, the build fails.

I hope that makes sense. Regards

On 9/5/2016 7:33 PM, Ben Parees wrote:
> On Fri, Sep 2, 2016 at 4:27 PM, Ravi <ravikapoor...@gmail.com> wrote:
>> Ben, thanks for pointing me in the right direction. However, after a week, I am still struggling and need help.
>> The questions you raised are genuine issues which, if managed by openshift, will be easy to handle; however, if openshift does not manage them, then manually managing them is certainly a difficult task. Leaving that aside, I have been struggling with running my app on openshift. Here is a list of everything I tried. As suggested by you, I tried to create a volume and run the java docker with it. I am getting really lost in a variety of issues; here are some:
>
> I still think you'll get more mileage by trying to use the system as it was designed to be used (build an image with your compiled source built in) instead of trying to force a different workflow onto it.
>
>> - unless I login with the system:admin user (no password), I am not authorized to mount a volume.
>
> what type of volume? what do you mean by "mount a volume"? what commands are you running? how is your pod or deployment config defined?
>
>> - I can only login with system:admin on the command line; the UI gives me an error. So basically I cannot visually see mounted volumes
>> - There is no way from the UI to create a Volume Claim, I must define a JSON
>> - I was unable to find any documentation for this JSON and had to copy from other places
>
> you can use "oc set volumes" to add volume claims to a deployment config, once you have (as an administrator) defined persistent volumes in your cluster. you can also "attach storage" to a deployment config from within the openshift console, but that does not apply to your scenario, since you are trying to mount a "specific" volume into your pod instead of just requesting persistent storage.
>
>> - After all this, how do I know which volume is being attached to which volume claim?
>
> you aren't supposed to care. You ask for persistent storage, the system finds persistent storage to meet those needs, and you use it.
If you're trying to set up a specific persistent volume definition with existing content, and then ensure that particular PV gets assigned to your Pod then you don't use a PVC, you just reference the volume directly in the Pod definition as with the git repo volume example. - I copied mongodb.json and switched image to java.json, this did not work - I decided, this was too complex, lets just do S2I. However, when I cannot find any documentation how to do it. The example images work but when i try my own node or JEE project, S2I fails. I am guessing it needs some specific files in source to do this. - While PHP project https://github.com/gshipley/simplephp <https://github.com/gshipley/simplephp> works with S2I with only a php file, when I create a nodejs file, it does not work. I could not find documentation on how to get my node file to run. https://github.com/openshift/nodejs-ex <https://github.com/openshift/nodejs-ex> https://docs.openshift.org/latest/using_images/s2i_images/nodejs.html <https://docs.openshift.org/latest/using_images/s2
Re: few basic questions about S2I and docker run
Ben, thanks for pointing me in the right direction. However, after a week, I am still struggling and need help.

The questions you raised are genuine issues which, if managed by openshift, will be easy to handle; however, if openshift does not manage them, then manually managing them is certainly a difficult task.

Leaving that aside, I have been struggling with running my app on openshift. Here is a list of everything I tried. As suggested by you, I tried to create a volume and run the java docker with it. I am getting really lost in a variety of issues; here are some:

- unless I login with the system:admin user (no password), I am not authorized to mount a volume.
- I can only login with system:admin on the command line; the UI gives me an error. So basically I cannot visually see mounted volumes
- There is no way from the UI to create a Volume Claim, I must define a JSON
- I was unable to find any documentation for this JSON and had to copy from other places
- After all this, how do I know which volume is being attached to which volume claim?
- I copied mongodb.json and switched the image to make a java.json; this did not work
- I decided this was too complex, so let's just do S2I. However, I cannot find any documentation on how to do it. The example images work, but when I try my own node or JEE project, S2I fails. I am guessing it needs some specific files in the source to do this.
- While the PHP project https://github.com/gshipley/simplephp works with S2I with only a php file, when I create a nodejs file, it does not work. I could not find documentation on how to get my node file to run.
- I tried to do walkthroughs, but most of them use openshift online and a command "rhc" that is not available to me.

And all I wanted to do was run one simple command:

docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp openjdk:8-jre-alpine java myClass

ARGGG!! HELP please.
On 8/26/2016 3:24 PM, Ben Parees wrote:

On Fri, Aug 26, 2016 at 6:10 PM, Ravi Kapoor <ravikapoor...@gmail.com> wrote:
> Ben, thank you so much for taking the time to explain. This is very helpful. If I may, I have a few followup questions:
>
>> That is not a great approach to running code. It's fine for development, but you really want to be producing immutable images that a developer can hand to QE, and once QE has tested it, they can hand that exact same image to prod, and there's no risk that pieces have changed.
>
> Q1: It seems like Lyft uses the approach I was mentioning, i.e. inject code into dockers rather than copy code inside dockers (ref: https://youtu.be/iC2T3gJsB0g?t=595). In this approach there are only two elements - the image (which will not change) and the code build/tag, which will also not change. So what else can change?

Since you're mounting the code from the local filesystem into the running container, how do you know the code is the same on every machine that you're running the container on? If you have 15 nodes in your cluster, what happens when only 14 of them get the latest code update and the 15th one is still mounting an old file? Or your admin accidentally copies a dev version of the code to one of the nodes? When you look at a running container, how do you know what version of the application it's running, short of inspecting the mounted content? When you bring a new node online in your cluster, how do you get all the right code onto that node so all your images (thousands possibly!) are able to mount what they need when they start up? Do you put all the code for all your applications on all your nodes so that you can run any application on any node? Do you build your own infrastructure to copy the right code to the right place before starting an application? Do you rely on a shared filesystem mounted to all your nodes to make the code accessible?
These are questions you don't have to answer when the image *is* the application.

>> running things in that way means you need to get both the image and your class files into paths on any machine where the image is going to be run, and then specify that mount path correctly
>
> Q2: I would think that openshift has a mechanism to pull files from git to a temp folder and a way to volume mount that temp folder into any container it runs. Volume mounts are a very basic feature of dockers and I am hoping they are somehow workable with openshift. Are they not? Don't we need them for, let's say, database dockers? Let's say a mongodb container is running, writing data to a volume-mounted disk. If the container crashes, is openshift able to start a new container with the previously saved data?

Openshift does support git-based volumes if you want to go that approach: https://docs.openshift.org/latest/dev_guide/volumes.html#adding-volumes

i'm not sure whether you can provide git credentials t
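The git-volume approach from that link can also be wired up from the CLI. A sketch — the dc name and repo URL are placeholders, --source takes a raw volume source as JSON, and note that a gitRepo volume clones once at pod start (it does not track the repo afterwards):

```shell
# Mount a fresh clone of the repo into each new pod of dc/myapp.
oc set volume dc/myapp --add --name=source \
  --mount-path=/usr/src/app \
  --source='{"gitRepo": {"repository": "https://github.com/example/app.git", "revision": "master"}}'
```

This gets closest to the "inject code into a stock image" workflow while keeping the per-node copy problem out of your hands, at the cost of a clone on every pod start.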
Re: few basic questions about S2I and docker run
Ben, thank you so much for taking the time to explain. This is very helpful. If I may, I have a few followup questions:

> That is not a great approach to running code. It's fine for development, but you really want to be producing immutable images that a developer can hand to QE, and once QE has tested it, they can hand that exact same image to prod, and there's no risk that pieces have changed.

Q1: It seems like Lyft uses the approach I was mentioning, i.e. inject code into dockers rather than copy code inside dockers (ref: https://youtu.be/iC2T3gJsB0g?t=595). In this approach there are only two elements - the image (which will not change) and the code build/tag, which will also not change. So what else can change?

> running things in that way means you need to get both the image and your class files into paths on any machine where the image is going to be run, and then specify that mount path correctly

Q2: I would think that openshift has a mechanism to pull files from git to a temp folder and a way to volume mount that temp folder into any container it runs. Volume mounts are a very basic feature of dockers and I am hoping they are somehow workable with openshift. Are they not? Don't we need them for, let's say, database dockers? Let's say a mongodb container is running, writing data to a volume-mounted disk. If the container crashes, is openshift able to start a new container with the previously saved data?

Q3: Even if you disagree, I would still like to know (if nothing else, then for learning/education) how to run external images with volume mounts and other parameters being passed into the image. I am having a very hard time finding this.

regards
Ravi

On Fri, Aug 26, 2016 at 10:29 AM, Ben Parees <bpar...@redhat.com> wrote:
> On Fri, Aug 26, 2016 at 1:07 PM, Ravi <ravikapoor...@gmail.com> wrote:
>> So I am trying to use openshift to manage our dockers.
>>
>> First problem I am facing is that most of the documentation and image templates seem to be about S2I.
> When it comes to building images, openshift supports basically 4 approaches, in descending order of recommendation and increasing order of flexibility:
>
> 1) s2i (you supply source and pick a builder image, we build a new application image and push it somewhere)
> 2) docker-type builds (you supply the dockerfile and content, we run docker build for you and push the image somewhere)
> 3) custom (you supply an image, we'll run that image, it can do whatever it wants to "build" something and push it somewhere, whether that something is an image, jar file, etc)
> 4) build your images externally on your own infrastructure and just use openshift to run them.
>
> The first 3 of those are discussed here:
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#builds
>
>> We are considering continuous builds for multiple projects, and building an image every 1 hour for multiple projects would create a total of 20GB of images every day.
>
> I'm not sure how this statement relates to s2i. Do you have a specific concern about s2i with respect to creating these images? Openshift does offer image pruning to help deal with the number of images it sounds like you'll be creating, if you're interested in that.
>
>> Q1: Is this the right way of thinking? Since today most companies are doing CI, this should be a common problem. Why is S2I considered an impressive feature?
>
> S2I really has little to do with CI/CD. S2I is one way to produce docker images; there are others, as I listed above.
> Your CI flow is going to be something like:
>
> 1) change source
> 2) build that source into an image (in whatever way you want; s2i is one mechanism)
> 3) test the new image
> 4) push the new image into production
>
> The advantages of using s2i are not about how it specifically works well with CI, but rather the advantages it offers around building images in a quick, secure, convenient way, as described here:
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#source-build
>
>> So, I am trying to use off-the-shelf images and inject code/conf into them. I know how to do this from the docker command line (example: docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp openjdk:8-jre-alpine java myClass)
>
> That is not a great approach to running code. It's fine for development, but you really want to be producing immutable images that a developer can hand to QE, and once QE has tested it, they can hand that exact same image to prod, and there's no risk that pieces have chang
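For comparison, approach 1 (s2i) is a one-liner once you pick a builder image. A sketch — the nodejs-ex repo is the standard example from this thread; the wildfly line is illustrative of the same pattern for Java, with the --context-dir pointing at one quickstart:

```shell
# builder-image~source-repo => openshift builds an application image
# from the source and deploys it.
oc new-app nodejs~https://github.com/openshift/nodejs-ex

# Java works the same way against a JEE builder image, e.g.:
oc new-app wildfly~https://github.com/jboss-developer/jboss-eap-quickstarts \
  --context-dir=helloworld
```

This is why the php example "just works": the builder image supplies the runtime, and the repo only needs to look like a project that builder knows how to build (package.json for node, a pom.xml for the Java builders).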
few basic questions about S2I and docker run
So I am trying to use openshift to manage our dockers.

First problem I am facing is that most of the documentation and image templates seem to be about S2I. We are considering continuous builds for multiple projects, and building an image every 1 hour for multiple projects would create a total of 20GB of images every day.

Q1: Is this the right way of thinking? Since today most companies are doing CI, this should be a common problem. Why is S2I considered an impressive feature?

So, I am trying to use off-the-shelf images and inject code/conf into them. I know how to do this from the docker command line (example: docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp openjdk:8-jre-alpine java myClass)

Q2: How do I configure the exact same command from openshift? I will need the following steps:
1. Jenkins is pushing compiled jar files to a git repository. The first step will be to pull the files down.
2. I may have to unzip some files (in case it is a bunch of configurations etc.)
3. Openshift should use docker run to create containers.

thanks so much for the help
Ravi
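An ad-hoc host-folder bind mount has no direct equivalent on a multi-node cluster (the pod may land on any node), but the closest translation of that docker run line, minus the volume, is a sketch like this — the name "myapp" is a placeholder:

```shell
# docker run ... openjdk:8-jre-alpine java myClass, expressed as a
# generated deployment config; everything after -- becomes the
# container's command.
oc run myapp --image=openjdk:8-jre-alpine \
  --command -- java myClass
```

The class files still have to be inside the image (or a cluster-provided volume) for this to start, which is the crux of the S2I discussion that follows in this thread.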
Re: no items in "browse catalog" page
Thank you so much Cesar. Looks like that is exactly what I was facing. As commented in the defect log, I tried 'oc cluster up --version=v1.3.0-alpha.2' and that also works.

P.S.: if any developers are here, I would suggest switching the OpenShift Docker distribution to start with "oc cluster up" instead of "openshift start". I was unable to get Docker to show the default images (which are set up by oc cluster up, as Jessica mentioned), and was also unable to find a way to do "oc cluster up" after Docker started; I got the error "openshift is already up". I tried "oc cluster down", which killed Docker. I also tried building my own Dockerfile with a different entry point, but for some reason that also did not work.

On Wed, Aug 17, 2016 at 1:03 PM, Cesar Wong <cew...@redhat.com> wrote:
> Hi Ravi,
>
> What distro are you using?
>
> You may be hitting this issue: https://github.com/openshift/origin/issues/10215
>
> A simple workaround is to use an earlier version of the OpenShift images: stop your current cluster with 'oc cluster down', then bring it back up with 'oc cluster up --version=v1.2.0'.
Re: no items in "browse catalog" page
Unfortunately I am not familiar enough to follow instructions such as "giving the API server to use the public key to verify tokens".
Also, this was more than a year ago; how come the out-of-box version is unstable enough that it doesn't work on its own without fixes?
Are there detailed instructions on how to get it up and running?

On Mon, Aug 15, 2016 at 10:49 PM, Jonathan Yu <jaw...@redhat.com> wrote:
> Hey Ravi,
>
>> 1. oc cluster up
>> It gave the error:
>> Error: did not detect an --insecure-registry argument on the Docker daemon
>> Solution: ensure that the Docker daemon is running with the following argument:
>> --insecure-registry 172.30.0.0/16
>
> You will need to modify your /etc/sysconfig/docker file (on Fedora and I think also CentOS and RHEL) to add this flag. Other OSes will have these flags potentially stored elsewhere, like /etc/default on Debian and Ubuntu.
>
>> 2. Google search reveals too many places where this should be edited.
>
> Yes, unfortunately different distros do things differently as there's no "standard" approach. I would suggest adding your distro name to your Google searches to get the most relevant results.
>
>> 3. However, as I follow the tutorial, at time 6:30 (https://youtu.be/yFPYGeKwmpk?t=390), it fails to deploy. The log shows the following error:
>> error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
>
> Hmm, this looks interesting. Unfortunately I'm not an expert so I'm not sure where to look for this. A quick search for the error message yields this bug: https://github.com/kubernetes/kubernetes/issues/10620 - does anything in there help?
Re: no items in "browse catalog" page
Thanks Jessica, that was helpful. I had to do the following:

1. oc cluster up
It gave the error:
Error: did not detect an --insecure-registry argument on the Docker daemon
Solution: ensure that the Docker daemon is running with the following argument:
--insecure-registry 172.30.0.0/16

2. Google search reveals too many places where this should be edited. For me, I had to add it to ExecStart in the file /etc/systemd/system/docker.service. Now I do see images as shown in the walkthrough https://www.youtube.com/watch?v=yFPYGeKwmpk

3. However, as I follow the tutorial, at time 6:30 (https://youtu.be/yFPYGeKwmpk?t=390), it fails to deploy. The log shows the following error:
error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

Once again, I find too many suggestions for how this can be fixed. I am afraid trying all those will take me days. One of the pages said this has been fixed in OS 1.3; however, I am running OS 1.3.

thanks so much

On Mon, Aug 15, 2016 at 4:33 PM, Jessica Forrester <jforr...@redhat.com> wrote:
> So if you are just running a local dev cluster to try things out, I recommend using 'oc cluster up' instead; it will set up a lot for you, including all the example image streams and templates.
>
> If you want to add them in an existing cluster, see step 10 here:
> https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc#develop-on-virtual-machine-using-vagrant
>
> The files it is referring to are in the origin repo under examples.
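The /etc/sysconfig/docker fix discussed in this thread amounts to appending --insecure-registry to the Docker daemon's OPTIONS line. A minimal sketch that edits a throwaway copy of such a file so it is safe to run anywhere; the existing OPTIONS values are made up, and on a real Fedora/CentOS/RHEL box you would edit /etc/sysconfig/docker itself and then restart the daemon:

```shell
# Work on a temporary copy of an /etc/sysconfig/docker-style file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
OPTIONS='--selinux-enabled --log-driver=journald'
EOF

# Append the --insecure-registry flag inside the quoted OPTIONS value.
sed -i "s|^OPTIONS='\(.*\)'|OPTIONS='\1 --insecure-registry 172.30.0.0/16'|" "$conf"

# Show the result; on a real system you would follow this with:
#   systemctl daemon-reload && systemctl restart docker
grep '^OPTIONS' "$conf"
```

For systemd-based setups where the flag lives in an ExecStart line (as Ravi found), the same flag is appended to the dockerd invocation in the unit file instead.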
no items in "browse catalog" page
I am a newbie, so excuse my ignorance. I have tried for 2 days now to get the "browse catalog" page to show me the catalog. I do not see any errors or images. Due to this I am not able to follow any tutorials. Thanks for helping.

This is what my page looks like (text copy in case image attachments are not allowed):

No images or templates.

No images or templates are loaded for this project or the shared openshift namespace. An image or template is required to add content.

To add an image stream or template from a file, use the editor in the Import YAML / JSON tab, or run the following command:

oc create -f -n test
Back to overview

Here is a screenshot

[image: Inline image 1]