Any tool to convert DeploymentConfig to Deployment

2019-08-27 Thread Cameron Braid
I have a bunch of DeploymentConfig resources I need to convert
to Deployment resources.

Does anyone know of a tool to do this conversion?

I realise that there are some features that aren't convertible (triggers,
image streams, etc.) but I don't need those features.
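
In case it helps frame the question, the manual mapping I have in mind is
roughly the following (a sketch only; the resource name and labels are
illustrative, and the DC-only fields simply get dropped):

apiVersion: apps/v1            # was a DeploymentConfig
kind: Deployment
metadata:
  name: my-app                 # illustrative
spec:
  replicas: 1
  selector:
    matchLabels:               # a DC uses a plain label map under spec.selector
      app: my-app
  strategy:
    type: RollingUpdate        # DC "Rolling"/"Recreate" roughly map to RollingUpdate/Recreate
  template:                    # the pod template carries over largely unchanged,
    ...                        # except images must be plain image references rather
                               # than ones resolved by image stream triggers
# dropped: spec.triggers, spec.test and the DC-specific strategy params (hooks etc.)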

Cameron


skydns i/o timeout

2019-01-30 Thread Cameron Braid
Hi,

I am having some DNS issues in my origin 3.7 cluster

I am seeing occasional DNS lookup delays of around 1 to 5 seconds, every 5 to
10 minutes.

While trying to find the cause, I found the following in the origin-node logs,
roughly once a second:

Jan 31 13:10:18 node01-2018.drivenow.com.au origin-node[39574]: I0131
13:10:18.042221   39574 logs.go:41] skydns: failure to forward request
"read udp 10.118.56.32:53613->10.118.56.32:53: i/o timeout"

However, I can successfully do DNS resolution on the node itself, as per:

dig google.com @10.118.56.32 +short +search

216.58.200.110

And I can resolve internal service IPs too:

dig docker-registry.default.svc @10.118.56.32 +short +search

172.30.99.40

Am I correct in concluding from this testing that origin-node is talking to
the dnsmasq server on 10.118.56.32?
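
For what it's worth, these are the kinds of checks I have been running on the
node to see what is listening on port 53 and what the node resolves against
(standard ss/cat commands; the dnsmasq drop-in path is my assumption about
where openshift-ansible puts it):

cat /etc/resolv.conf                 # node resolver; expect the node IP (dnsmasq) here
ss -lnup | grep ':53 '               # who owns port 53: dnsmasq on the node IP, skydns locally?
cat /etc/dnsmasq.d/origin-dns.conf   # assumed location of the origin dnsmasq forwarding config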

I get the same issue on all three nodes.

I can successfully run "oc adm diagnostics NetworkCheck" without any errors.

Any thoughts on where else to look?

Cameron


Re: Origin 3.6 cluster no longer deploying deployments

2018-12-02 Thread Cameron Braid
I just tried restarting the controller service a couple more times; those
panics don't show up anymore and the deployment has started.
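
For reference, the restart/check cycle was nothing fancier than the usual
systemctl/journalctl commands:

systemctl restart origin-master-controllers
journalctl -u origin-master-controllers -f | grep -E 'Observed a panic|Started'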



On Mon, 3 Dec 2018 at 10:39 Cameron Braid  wrote:

>
> Yeah, these look like errors:
>
> Dec 03 10:28:44 node01-2018.drivenow.com.au origin-master-controllers[89591]:
> E1203 10:28:44.150240   89591 runtime.go:66] Observed a panic: "invalid
> memory address or nil pointer dereference" (runtime error: invalid memory
> address or nil pointer dereference)
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /usr/lib/golang/src/runtime/asm_amd64.s:514
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: /usr/lib/golang/src/runtime/panic.go:489
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: /usr/lib/golang/src/runtime/panic.go:63
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /usr/lib/golang/src/runtime/signal_unix.go:290
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/controller/daemon/daemoncontroller.go:155
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/client-go/tools/cache/controller.go:192
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: :57
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/client-go/tools/cache/shared_informer.go:547
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /usr/lib/golang/src/runtime/asm_amd64.s:2197
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: I1203 10:28:44.230853   89591
> controller_utils.go:1032] Caches are synced for RC controller
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: W1203 10:28:44.300741   89591
> shared_informer.go:298] resyncPeriod 300 is smaller than
> resyncCheckPeriod 6000 and the informer has already started.
> Changing it to 6000
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: I1203 10:28:44.300811   89591
> controllermanager.go:466] Started "statefulset"
> Dec 03 10:28:44 node01-2018.drivenow.com.au origin-master-controllers[89591]:
> E1203 10:28:44.300820   89591 runtime.go:66] Observed a panic: "invalid
> memory address or nil pointer dereference" (runtime error: invalid memory
> address or nil pointer dereference)
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /builddir/build/BUILD/origin-3.7.0/_output/local/go/src/
> github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]:
> /usr/lib/golang/src/runtime/asm_amd64.s:514
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: /usr/lib/golang/src/runtime/panic.go:489
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591]: /usr/lib/golang/src/runtime/panic.go:63
> Dec 03 10:28:44 node01-2018.drivenow.com.au
> origin-master-controllers[89591

Re: Origin 3.6 cluster no longer deploying deployments

2018-12-02 Thread Cameron Braid
Dec 03 10:28:44 node01-2018.drivenow.com.au
origin-master-controllers[89591]: W1203 10:28:44.943364   89591
nodecontroller.go:877] Missing timestamp for Node node02-2018. Assuming now
as a timestamp.
Dec 03 10:28:44 node01-2018.drivenow.com.au
origin-master-controllers[89591]: I1203 10:28:44.943394   89591
nodecontroller.go:793] NodeController detected that zone  is now in state
Normal.
Dec 03 10:28:44 node01-2018.drivenow.com.au
origin-master-controllers[89591]: I1203 10:28:44.950122   89591
controller_utils.go:1032] Caches are synced for daemon sets controller

Cameron

On Mon, 3 Dec 2018 at 02:48 Clayton Coleman  wrote:

> Are there errors in the controller logs?
>
> On Dec 2, 2018, at 2:42 AM, Cameron Braid  wrote:
>
> Sorry, a typo - it's a 3.7 cluster, not 3.6
>
>  ~> oc version
>
> oc v3.7.2+282e43f
>
> kubernetes v1.7.6+a08f5eeb62
>
> features: Basic-Auth
>
>
> Server
>
> openshift v3.7.0+7ed6862
>
> kubernetes v1.7.6+a08f5eeb62
>
> On Sun, 2 Dec 2018 at 18:39 Cameron Braid  wrote:
>
>> I have a strange issue in my 3.6 cluster.  I create an extensions/v1beta1
>> Deployment and nothing happens.  No pods are created.  oc describe shows:
>>
>>  oc -n drivenow-staging-x describe deployment strimzi-cluster-operator
>> Name: strimzi-cluster-operator
>> Namespace: drivenow-staging-x
>> CreationTimestamp: Sun, 02 Dec 2018 01:33:10 +1100
>> Labels: app=strimzi
>> Annotations: 
>> Selector: name=strimzi-cluster-operator
>> *Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable*
>> StrategyType: Recreate
>> MinReadySeconds: 0
>> Pod Template:
>>   Labels: name=strimzi-cluster-operator
>>   Service Account: strimzi-cluster-operator
>>   Containers:
>>strimzi-cluster-operator:
>> Image: strimzi/cluster-operator:0.6.0
>> Port: 
>> Limits:
>>   cpu: 1
>>   memory: 256Mi
>> Requests:
>>   cpu: 200m
>>   memory: 256Mi
>> Liveness: http-get http://:8080/healthy delay=10s timeout=1s
>> period=30s #success=1 #failure=3
>> Readiness: http-get http://:8080/ready delay=10s timeout=1s
>> period=30s #success=1 #failure=3
>> Environment: ...
>> Mounts: 
>>   Volumes: 
>> OldReplicaSets: 
>> NewReplicaSet: 
>> Events: 
>>
>> All nodes are ready, schedulable and there doesn't appear to be anything
>> in the logs.
>>
>> I've tried restarting origin-node, origin-master-api and
>> origin-master-controllers on all nodes and that has no impact.
>>
>> I'm out of ideas.
>>
>> Cameron
>>


Re: Origin 3.6 cluster no longer deploying deployments

2018-12-01 Thread Cameron Braid
Sorry, a typo - it's a 3.7 cluster, not 3.6

 ~> oc version

oc v3.7.2+282e43f

kubernetes v1.7.6+a08f5eeb62

features: Basic-Auth


Server

openshift v3.7.0+7ed6862

kubernetes v1.7.6+a08f5eeb62

On Sun, 2 Dec 2018 at 18:39 Cameron Braid  wrote:

> I have a strange issue in my 3.6 cluster.  I create an extensions/v1beta1
> Deployment and nothing happens.  No pods are created.  oc describe shows:
>
>  oc -n drivenow-staging-x describe deployment strimzi-cluster-operator
> Name: strimzi-cluster-operator
> Namespace: drivenow-staging-x
> CreationTimestamp: Sun, 02 Dec 2018 01:33:10 +1100
> Labels: app=strimzi
> Annotations: 
> Selector: name=strimzi-cluster-operator
> *Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable*
> StrategyType: Recreate
> MinReadySeconds: 0
> Pod Template:
>   Labels: name=strimzi-cluster-operator
>   Service Account: strimzi-cluster-operator
>   Containers:
>strimzi-cluster-operator:
> Image: strimzi/cluster-operator:0.6.0
> Port: 
> Limits:
>   cpu: 1
>   memory: 256Mi
> Requests:
>   cpu: 200m
>   memory: 256Mi
> Liveness: http-get http://:8080/healthy delay=10s timeout=1s
> period=30s #success=1 #failure=3
> Readiness: http-get http://:8080/ready delay=10s timeout=1s
> period=30s #success=1 #failure=3
> Environment: ...
> Mounts: 
>   Volumes: 
> OldReplicaSets: 
> NewReplicaSet: 
> Events: 
>
> All nodes are ready, schedulable and there doesn't appear to be anything in
> the logs.
>
> I've tried restarting origin-node, origin-master-api and
> origin-master-controllers on all nodes and that has no impact.
>
> I'm out of ideas.
>
> Cameron
>


Origin 3.6 cluster no longer deploying deployments

2018-12-01 Thread Cameron Braid
I have a strange issue in my 3.6 cluster.  I create an extensions/v1beta1
Deployment and nothing happens.  No pods are created.  oc describe shows:

 oc -n drivenow-staging-x describe deployment strimzi-cluster-operator
Name: strimzi-cluster-operator
Namespace: drivenow-staging-x
CreationTimestamp: Sun, 02 Dec 2018 01:33:10 +1100
Labels: app=strimzi
Annotations: 
Selector: name=strimzi-cluster-operator
*Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable*
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
  Labels: name=strimzi-cluster-operator
  Service Account: strimzi-cluster-operator
  Containers:
   strimzi-cluster-operator:
Image: strimzi/cluster-operator:0.6.0
Port: 
Limits:
  cpu: 1
  memory: 256Mi
Requests:
  cpu: 200m
  memory: 256Mi
Liveness: http-get http://:8080/healthy delay=10s timeout=1s period=30s
#success=1 #failure=3
Readiness: http-get http://:8080/ready delay=10s timeout=1s period=30s
#success=1 #failure=3
Environment: ...
Mounts: 
  Volumes: 
OldReplicaSets: 
NewReplicaSet: 
Events: 

All nodes are ready, schedulable and there doesn't appear to be anything in
the logs.

I've tried restarting origin-node, origin-master-api and
origin-master-controllers on all nodes and that has no impact.
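
For completeness, the obvious follow-up checks would be something like this
(standard oc and journalctl commands; namespace and service names as above):

oc -n drivenow-staging-x get replicasets                     # was any ReplicaSet created for the Deployment?
oc -n drivenow-staging-x get events --sort-by=.lastTimestamp # any controller or scheduling events?
journalctl -u origin-master-controllers | grep -iE 'deployment|replica'   # controller-side errors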

I'm out of ideas.

Cameron


ip failover Unable to access script `

2018-05-11 Thread Cameron Braid
I have set up ipfailover and it seems to be working; however, in the pod logs
I get the following log message:

Unable to access script `http://10.118.56.40/8443>

I'm not sure what the proper way to test this script is, but the following
commands seem to indicate that it works.

Trying an open port:

sh-4.2# echo > /dev/tcp/10.118.56.32/8443
sh-4.2# echo $?
0

Trying a closed port:

sh-4.2# echo > /dev/tcp/10.118.56.32/8441
sh: connect: Connection refused
sh: /dev/tcp/10.118.56.32/8441: Connection refused
sh-4.2# echo $?
1
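
Another way I have been exercising it, closer to how I assume keepalived runs
its check (with a timeout and looking only at the exit status; the 1-second
timeout is my guess, not taken from the ipfailover template):

timeout 1 bash -c 'echo > /dev/tcp/10.118.56.32/8443'; echo $?   # open port, expect 0
timeout 1 bash -c 'echo > /dev/tcp/10.118.56.32/8441'; echo $?   # closed port, expect non-zero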

So can I ignore that "unable to access script" log? Is keepalived trying to
stat the script or something?

Cheers

Cameron


Re: DNS resolving problem - in pod

2017-10-19 Thread Cameron Braid
I had that happen quite a bit with containers based on Alpine Linux.

Cam

On Thu, 19 Oct 2017 at 23:49 Łukasz Strzelec wrote:

> Dear all :)
>
> I have following problem:
>
> [image: inline image 1]
>
>
> Frequently I have to restart origin-node to solve this issue, but I can't
> find the root cause of it.
> Does anybody have any idea? Where should I start looking?
> In addition, this problem affects different cluster nodes; random pods on
> different nodes have this issue.
>
>
> Best regards
> --
> Ł.S.


make build-rpms generates rpms with version 0.0.1

2016-12-28 Thread Cameron Braid
I wish to build RPMs for the release-1.4 branch.

I checked out the branch and ran "make build-rpms", and I get the following
(abridged) output:

BUILD_TESTS= OS_ONLY_BUILD_PLATFORMS='linux/amd64' hack/build-rpm-release.sh
[INFO] Building Origin release RPMs with tito...
Building package [v0.0.1]
Wrote: /tmp/openshift/build-rpm-release/tito/origin-git-15116.64e5417.tar.gz
OS_GIT_COMMIT::407c344
OS_GIT_VERSION::v1.4.0-rc1+407c344-61
OS_GIT_MAJOR::1
OS_GIT_MINOR::4+
...
++ Placing binaries
++ Creating
openshift-origin-client-tools-v1.4.0-rc1+407c344-61-linux-64bit.tar.gz
...
Wrote:
/tmp/openshift/build-rpm-release/tito/origin-0.0.1-0.git.15116.64e5417.el7.centos.src.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/origin-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/origin-master-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/origin-node-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/tuned-profiles-origin-node-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/origin-clients-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/origin-dockerregistry-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/origin-pod-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/x86_64/origin-sdn-ovs-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/noarch/origin-excluder-0.0.1-0.git.15116.64e5417.el7.centos.noarch.rpm
Wrote:
/tmp/openshift/build-rpm-release/tito/noarch/origin-docker-excluder-0.0.1-0.git.15116.64e5417.el7.centos.noarch.rpm

As you can see, the release build is working and generates the tarball with
the correct version; however, the RPMs use version 0.0.1:

# rpm -qip
/tmp/openshift/build-rpm-release/tito/x86_64/origin-0.0.1-0.git.15116.64e5417.el7.centos.x86_64.rpm
Name: origin
Version : 0.0.1
Release : 0.git.15116.64e5417.el7.centos

The only way I have found to force the version is to edit the origin.spec
file, change the Version line, commit, and then run make build-rpms.
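
Concretely, the workaround is just this (a sketch of the manual edit described
above; 1.4.0 is only an example version string):

sed -i 's/^Version:.*/Version: 1.4.0/' origin.spec
git commit -am 'bump spec Version for local rpm build'
make build-rpms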

Am I following the correct procedure?

Cheers

Cameron


Re: Push to registry sometimes fails

2016-12-07 Thread Cameron Braid
Yeah, I secured the registry; however, I couldn't get pushing to work when
using the TLS certificates. I kept getting "Error: x509: certificate
signed by unknown authority" even with the master's ca.crt copied into
/etc/docker/certs.d/172.30.25.196:5000/ca.crt. I tried going through the
"securing the registry" steps three times, and I can't get it to work. I
could curl --cacert /etc/docker/certs.d/172.30.25.196:5000/ca.crt
https://172.30.25.196:5000/v2/ just fine, but docker still didn't like it.

Adding "--insecure-registry 172.30.25.196:5000" is a workaround that mostly
works, but it is still flaky when pushing from a build.
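
One more check that might narrow it down is comparing the certificate the
registry actually serves with the CA I copied under certs.d (plain openssl;
IP and path as above):

openssl s_client -connect 172.30.25.196:5000 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer                                  # what the registry presents
openssl x509 -in /etc/docker/certs.d/172.30.25.196:5000/ca.crt -noout -subject   # the CA docker should trust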

I'd really like to get a secure registry working, so any thoughts?

Cameron

On Thu, 8 Dec 2016 at 12:26 Andy Goldstein <agold...@redhat.com> wrote:

Docker assumes that the registry talks TLS. It will only use http if you
specify the registry is insecure (typically via '--insecure-registry
172.30.0.0/16' in /etc/sysconfig/docker).

Is your registry secured?

On Wed, Dec 7, 2016 at 8:11 PM, Cameron Braid <came...@braid.com.au> wrote:

I am occasionally getting this error after a build, when pushing to the
internal registry:

Pushed 10/12 layers, 83% complete
Registry server Address:
Registry server User Name: serviceaccount
Registry server Email: serviceacco...@example.org
Registry server Password: <>
error: build error: Failed to push image: Get http://172.30.25.196:5000/v2/:
malformed HTTP response "\x15\x03\x01\x00\x02\x02"

It looks like the pusher is using plain HTTP to talk to the HTTPS registry.

What tells the pusher that the registry is TLS?

Cheers

Cameron



Push to registry sometimes fails

2016-12-07 Thread Cameron Braid
I am occasionally getting this error after a build, when pushing to the
internal registry:

Pushed 10/12 layers, 83% complete
Registry server Address:
Registry server User Name: serviceaccount
Registry server Email: serviceacco...@example.org
Registry server Password: <>
error: build error: Failed to push image: Get http://172.30.25.196:5000/v2/:
malformed HTTP response "\x15\x03\x01\x00\x02\x02"

It looks like the pusher is using plain HTTP to talk to the HTTPS registry.

What tells the pusher that the registry is TLS?
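
For what it's worth, the "\x15\x03\x01..." bytes look like a TLS record (0x15
is a TLS alert, 0x03 0x01 is TLS 1.0) coming back to a plain-HTTP request,
which is what makes me think the pusher chose http. A quick way to reproduce
the two behaviours from a node would be something like (plain curl; -k just
skips certificate verification for the test):

curl -v  http://172.30.25.196:5000/v2/    # plain HTTP against the TLS port: handshake bytes/garbage
curl -kv https://172.30.25.196:5000/v2/   # TLS: should get a proper HTTP response from the registry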

Cheers

Cameron


Re: few basic questions about S2I and docker run

2016-09-05 Thread Cameron Braid
Sorry to hijack your thread, but where is the "git repo volume example"?

In the origin git repo I can see the gitserver example
(https://github.com/openshift/origin/tree/master/examples/gitserver), but it
uses either ephemeral storage or a PVC.
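
Is the example something along these lines? This is my guess at what a direct
pod-level git volume reference would look like (a sketch only, using the
gitRepo volume type; the repository URL, image and paths are made up):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-source           # illustrative
spec:
  containers:
  - name: app
    image: openjdk:8-jre-alpine   # illustrative
    command: ["java", "myClass"]
    workingDir: /usr/src/myapp
    volumeMounts:
    - name: src
      mountPath: /usr/src/myapp
  volumes:
  - name: src
    gitRepo:                      # clones the repo into the volume when the pod starts
      repository: "https://example.com/my/repo.git"
      revision: "master"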

Cheers

Cameron

On Tue, 6 Sep 2016 at 12:34 Ben Parees  wrote:

> On Fri, Sep 2, 2016 at 4:27 PM, Ravi  wrote:
>
>>
>> Ben, thanks for pointing me in right direction. However, after a week, I
>> am still struggling and need help.
>>
>> The questions you raised are genuine issues which, if managed by
>> openshift, will be easy to handle; however, if openshift does not manage
>> them, then managing them manually is certainly a difficult task.
>>
>> Leaving that aside, I have been struggling with running my app on
>> openshift. Here is a list of everything I tried
>>
>> As suggested by you, I tried to create a volume and run the java docker
>> image with it. I am getting really lost in a variety of issues; here are some:
>>
>
> I still think you'll get more mileage by trying to use the system as it
> was designed to be used (build an image with your compiled source built in)
> instead of trying to force a different workflow onto it.
> ​
>
>
>>
>> - unless I log in with the service:admin user (no password), I am not
>> authorized to mount a volume.
>>
>
> ​what type of volume?  what do you mean by "mount a volume"?  what
> commands are you running?​  how is your pod or deployment config defined?
>
>
>
>> - I can only log in with service:admin on the command line; the UI gives me
>> an error. So basically I cannot visually see mounted volumes.
>> - There is no way from the UI to create a Volume Claim, I must define a JSON.
>> - I was unable to find any documentation for this JSON and had to copy
>> from other places.
>>
>
> ​​you can use "oc set volumes" to add volume claims to a deployment
> config, once you have (as an administrator) defined persistent volumes in
> your cluster.
>
> you can also "attach storage" to a deployment config from within the
> openshift console, but that does not apply to your scenario since you are
> trying to mount a "specific" volume into your pod instead of just
> requesting persistent storage.
>
>
>
>
>> - After all this, how do I know which volume is being attached to which
>> volume claim?
>>
>
> ​you aren't supposed to care.  You ask for persistent storage, the system
> finds persistent storage to meet those needs, and you use it.
>
> If you're trying to set up a specific persistent volume definition with
> existing content, and then ensure that particular PV gets assigned to your
> Pod then you don't use a PVC, you just reference the volume directly in the
> Pod definition as with the git repo volume example.
>
>
>
>> - I copied mongodb.json and switched the image to java.json; this did not work.
>> - I decided this was too complex, so let's just do S2I. However, I
>> cannot find any documentation on how to do it. The example images work, but
>> when I try my own Node.js or JEE project, S2I fails. I am guessing it needs
>> some specific files in the source to do this.
>> - While the PHP project https://github.com/gshipley/simplephp works with S2I
>> with only a php file, when I create a nodejs file it does not work. I
>> could not find documentation on how to get my node file to run.
>>
>
> ​https://github.com/openshift/nodejs-ex
> https://docs.openshift.org/latest/using_images/s2i_images/nodejs.html
> ​
>
>
>> - I tried to do walkthroughs, but most of them are using openshift online
>> and a command "rhc" that is not available to me.
>>
>
> I'm not sure what walkthroughs you found, but "rhc" is a command-line
> tool for the previous version of openshift, v2.  So that is irrelevant to
> what you're trying to do.  The v3 online environment is here:
>
> https://console.preview.openshift.com/console/
>
> and you can find a tutorial here:
> https://github.com/openshift/origin/tree/master/examples/sample-app
> (if you already have an openshift cluster, you can start at step 7,
> "Create a new project in OpenShift. "
> ​
>
>
>>
>> And all I wanted to do was run one simple command:
>>
>> docker run --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
>> openjdk:8-jre-alpine java myClass
>>
>> ARGGG!! HELP please.
>>
>>
>>
>> On 8/26/2016 3:24 PM, Ben Parees wrote:
>>
>>>
>>>
>>> On Fri, Aug 26, 2016 at 6:10 PM, Ravi Kapoor wrote:
>>>
>>>
>>> Ben,
>>>
>>> Thank you so much for taking the time to explain. This is very
>>> helpful.
>>> If I may, I have a few followup questions:
>>>
>>> > That is not a great approach to running code.  It's fine for
>>> development, but you really want to be producing immutable images that a
>>> developer can hand to QE; once QE has tested it, they can hand that exact
>>> same image to prod, and there's no risk that pieces have changed.
>>>
>>> Q1: It seems like Lyft uses the approach I was mentioning i.e.
>>> inject code into dockers rather than copy