Re: how to set no_proxy for s2i registry push?

2016-09-19 Thread Lionel Orellana
Adding the registry's address to NO_PROXY in /etc/sysconfig/docker works.

However it doesn't feel like something I should be changing by hand. Is
this something I missed in the ansible configuration when installing?
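For reference, the working change amounts to one line in /etc/sysconfig/docker; a minimal sketch, assuming the registry address from this thread (adjust for your environment):

```ini
# /etc/sysconfig/docker (sketch, not authoritative):
# exempt the internal registry from the proxy so pushes go direct
NO_PROXY=172.19.38.253
```

The docker service has to be restarted for the change to be picked up.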

On 20 September 2016 at 14:09, Lionel Orellana  wrote:

> Hi
>
> I'm getting this error when building the wildfly:10 builder sample on a
> containerised v1.3.0.
>
> Pushing image 172.19.38.253:5000/bimorl/wildfly:latest ...
> Registry server Address:
> Registry server User Name: serviceaccount
> Registry server Email: serviceacco...@example.org
> Registry server Password: <>
> error: build error: Failed to push image: Error: Status 503 trying to push
> repository bimorl/wildfly:
> ERROR: The requested URL could not be retrieved
> Connection to 172.19.38.253 failed.
> The system returned: (145) Connection timed out
> The remote host or network may be down. Please try the request again.
>
> curl fails with the same error but succeeds if I use --no-proxy
>
> I set Environment="NO_PROXY=172.19.38.253" in 
> /etc/systemd/system/docker.service.d/http-proxy.conf
> on all hosts to no avail.
>
> Where should I set no_proxy for this?
>
> Thanks
>
> Lionel.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


how to set no_proxy for s2i registry push?

2016-09-19 Thread Lionel Orellana
Hi

I'm getting this error when building the wildfly:10 builder sample on a
containerised v1.3.0.

Pushing image 172.19.38.253:5000/bimorl/wildfly:latest ...
Registry server Address:
Registry server User Name: serviceaccount
Registry server Email: serviceacco...@example.org
Registry server Password: <>
error: build error: Failed to push image: Error: Status 503 trying to push
repository bimorl/wildfly:
ERROR: The requested URL could not be retrieved
Connection to 172.19.38.253 failed.
The system returned: (145) Connection timed out
The remote host or network may be down. Please try the request again.

curl fails with the same error but succeeds if I use --no-proxy

I set Environment="NO_PROXY=172.19.38.253"
in /etc/systemd/system/docker.service.d/http-proxy.conf on all hosts to no
avail.
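For comparison, a complete drop-in of the kind described here might look as follows; a sketch, with a placeholder proxy URL (the registry address is the one from this thread):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf (sketch)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=172.19.38.253"
```

systemd only picks up drop-in changes after `systemctl daemon-reload` and a restart of the docker service.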

Where should I set no_proxy for this?

Thanks

Lionel.


Re: Can't run containers after advanced installation

2016-09-19 Thread Lionel Orellana
Adding -b=lbr0 --mtu=1450 to the docker daemon options
in /etc/systemd/system/docker.service.d/startup.conf seems to fix it.

The host already had docker installed, so perhaps the installation didn't
pass those options from /run/openshift-sdn/docker-network.
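A sketch of that drop-in, with the option values from this thread; the exact ExecStart line depends on your installed unit file (Docker 1.10 uses the `docker daemon` subcommand), so treat this as illustrative only:

```ini
# /etc/systemd/system/docker.service.d/startup.conf (sketch)
[Service]
# clear the unit's original ExecStart, then restate it with the
# openshift-sdn bridge and MTU options appended
ExecStart=
ExecStart=/usr/bin/docker daemon -b=lbr0 --mtu=1450
```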



On 16 September 2016 at 17:57, Lionel Orellana  wrote:

> Hi All
>
> I installed Origin v1.3.0-rc1 (unaware of the imminent 1.3 release) using
> the ansible method.
> Everything seemed to install OK, but whenever I try to build or run
> anything I get this type of error:
>
> Error syncing pod, skipping: failed to "StartContainer" for "POD" with
> RunContainerError: "runContainer: Error response from daemon: failed to
> create endpoint k8s_POD.4a82dc9f_nodejs-example-2-build_bimorl_
> 7194e4a4-7bcd-11e6-ab95-005056915814_cd78594e on network bridge: adding
> interface veth7932aba to bridge docker0 failed: could not find bridge
> docker0: route ip+net: no such network interface"
>
> On the host running a simple docker image fails too.
>
> bash-4.2$ sudo docker run hello-world
> docker: Error response from daemon: failed to create endpoint
> ecstatic_ramanujan on network bridge: adding interface veth59decae to
> bridge docker0 failed: could not find bridge docker0: route ip+net: no such
> network interface.
>
> I have restarted the docker service several times with no effect. docker0
> appears briefly when I do netstat -nr but then disappears again.
>
> docker version 1.10.3
> RHEL 7.2.
>
> Thanks
>
> Lionel.
>


Re: Pod does not have Scale up/down buttons

2016-09-19 Thread Ravi Kapoor
Once more, now with JSON

{
  "kind": "List",
  "apiVersion": "v1beta3",
  "metadata": {},
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "labels": {
          "name": "node-test"
        },
        "name": "node-test"
      },
      "spec": {
        "containers": [
          {
            "image": "node:4.4.7",
            "imagePullPolicy": "IfNotPresent",
            "name": "node-test",
            "command": [
              "node"
            ],
            "args": [
              "/usr/src/app/server.js"
            ],
            "ports": [
              {
                "containerPort": 8080,
                "protocol": "TCP"
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/usr/src/app",
                "name": "myclaim2"
              }
            ],
            "securityContext": {
              "capabilities": {},
              "privileged": false
            },
            "terminationMessagePath": "/dev/termination-log"
          }
        ],
        "volumes": [
          {
            "name": "myclaim2",
            "persistentVolumeClaim": {
              "claimName": "myclaim2"
            }
          }
        ],
        "dnsPolicy": "ClusterFirst",
        "restartPolicy": "Always",
        "serviceAccount": ""
      },
      "status": {}
    },
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "creationTimestamp": null,
        "name": "node-service"
      },
      "spec": {
        "portalIP": "",
        "ports": [
          {
            "name": "web",
            "port": 8080,
            "protocol": "TCP"
          }
        ],
        "selector": {
          "name": "node-test"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Route",
      "metadata": {
        "annotations": {},
        "name": "node-route"
      },
      "spec": {
        "to": {
          "name": "node-service"
        }
      }
    }
  ]
}
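For context on the missing buttons: the console's scale controls act on replication controllers and deployment configs, not on bare pods. A hedged sketch (reusing the names from the list above, container spec abbreviated) of the same workload expressed as a DeploymentConfig, which can be scaled:

```json
{
  "apiVersion": "v1",
  "kind": "DeploymentConfig",
  "metadata": { "name": "node-test" },
  "spec": {
    "replicas": 1,
    "selector": { "name": "node-test" },
    "template": {
      "metadata": { "labels": { "name": "node-test" } },
      "spec": {
        "containers": [
          {
            "name": "node-test",
            "image": "node:4.4.7",
            "command": ["node"],
            "args": ["/usr/src/app/server.js"],
            "ports": [{ "containerPort": 8080, "protocol": "TCP" }]
          }
        ]
      }
    }
  }
}
```

With the pod managed this way, `oc scale dc/node-test --replicas=2` also works from the CLI.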

On Mon, Sep 19, 2016 at 2:19 PM, Ravi Kapoor 
wrote:

>
> I created the following job definition. It successfully creates a service,
> pod and a route. I am able to access the website.
>
> It shows 1 Pod running, however, there are no scale up/down buttons in the
> UI.
> How can I scale this application up?
>
>


Re: Version of the artifacts supported / OpenShift Release

2016-09-19 Thread Charles Moulliard
The Swagger GUI is available at this address :
https://openshift-server:8443/swaggerapi

https://www.dropbox.com/s/e0jeow7zoj70oyy/Screenshot%202016-09-19%2019.30.14.png?dl=0

On Mon, Sep 19, 2016 at 7:13 PM, Charles Moulliard 
wrote:

> Is swagger packaged with OpenShift Origin to list the operations, ... ?
>
> On Mon, Sep 19, 2016 at 6:45 PM, Clayton Coleman 
> wrote:
>
>> That is generally the swagger docs (1.2 currently) listed here:
>> https://github.com/openshift/origin/tree/master/api/swagger-spec
>>
>> On Mon, Sep 19, 2016 at 8:37 AM, Charles Moulliard 
>> wrote:
>>
>>> Hi,
>>>
>>> Is it defined somewhere for each OpenShift Artifacts (Template,
>>> DeploymentConfig, Buildconfig, ...), the version of the "syntax" supported
>>> according to the OpenShift Server (1.2, 1.3, ...) where it will be executed
>>> ?
>>> Syntax = Json or YAML Structure supported
>>>
>>> Example: The DeploymentConfig Template is described here
>>> https://docs.openshift.com/enterprise/latest/dev_guide/deployments.html#dev-guide-deployments
>>> for the Api Version = 1
>>> - Is Api  Version 1 the Api supported by OpenShift Origin 1.x ?
>>> OpenShift enterprise 3.x ?
>>> - Is it possible to have a Reference Template for each Artifact that we
>>> can use top of OpenShift ? Maybe from a Github repo ?
>>>
>>> Regards,
>>>
>>> Charles
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>


Re: Version of the artifacts supported / OpenShift Release

2016-09-19 Thread Charles Moulliard
Is swagger packaged with OpenShift Origin to list the operations, ... ?

On Mon, Sep 19, 2016 at 6:45 PM, Clayton Coleman 
wrote:

> That is generally the swagger docs (1.2 currently) listed here:
> https://github.com/openshift/origin/tree/master/api/swagger-spec
>
> On Mon, Sep 19, 2016 at 8:37 AM, Charles Moulliard 
> wrote:
>
>> Hi,
>>
>> Is it defined somewhere for each OpenShift Artifacts (Template,
>> DeploymentConfig, Buildconfig, ...), the version of the "syntax" supported
>> according to the OpenShift Server (1.2, 1.3, ...) where it will be executed
>> ?
>> Syntax = Json or YAML Structure supported
>>
>> Example: The DeploymentConfig Template is described here
>> https://docs.openshift.com/enterprise/latest/dev_guide/deployments.html#dev-guide-deployments
>> for the Api Version = 1
>> - Is Api  Version 1 the Api supported by OpenShift Origin 1.x ? OpenShift
>> enterprise 3.x ?
>> - Is it possible to have a Reference Template for each Artifact that we
>> can use top of OpenShift ? Maybe from a Github repo ?
>>
>> Regards,
>>
>> Charles
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>


Re: Kibana Visualization Sharing

2016-09-19 Thread Jeff Cantrill
This is a known issue, captured at
https://trello.com/c/RLJbg6KX/385-share-kibana-user-dashboard and in the
referenced BZ.

Bottom line: to achieve multi-tenancy, the Kibana version provided by the
EFK stack gives each user essentially a 'profile' where their dashboards
and visualizations are stored. There is currently no easy mechanism in the
version we provide that would allow you to achieve sharing.

On Sun, Sep 18, 2016 at 9:20 PM, Frank Liauw  wrote:

> Hi All,
>
> I am using Openshift Origin's aggregated logging stack.
>
> However, my visualizations are not shared amongst users; even via the
> direct share link:
>
> http://i.stack.imgur.com/PoNvl.png
>
> Viewers (who are already logged in) get the following error: Could not
> locate that visualization (id: response-codes); they have access to the
> indices / namespaces on which the visualization was built upon.
>
> I have experience with vanilla Kibana, where visualizations were shared
> by default.
>
> Thanks!
>
> Frank
> Systems Engineer
>
> VSee: fr...@vsee.com  | Cell: +65 9338 0035
>
> Join me on VSee for Free 
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com


Re: Version of the artifacts supported / OpenShift Release

2016-09-19 Thread Clayton Coleman
That is generally the swagger docs (1.2 currently) listed here:
https://github.com/openshift/origin/tree/master/api/swagger-spec

On Mon, Sep 19, 2016 at 8:37 AM, Charles Moulliard 
wrote:

> Hi,
>
> Is it defined somewhere for each OpenShift Artifacts (Template,
> DeploymentConfig, Buildconfig, ...), the version of the "syntax" supported
> according to the OpenShift Server (1.2, 1.3, ...) where it will be executed
> ?
> Syntax = Json or YAML Structure supported
>
> Example: The DeploymentConfig Template is described here
> https://docs.openshift.com/enterprise/latest/dev_guide/deployments.html#dev-guide-deployments
> for the Api Version = 1
> - Is Api  Version 1 the Api supported by OpenShift Origin 1.x ? OpenShift
> enterprise 3.x ?
> - Is it possible to have a Reference Template for each Artifact that we
> can use top of OpenShift ? Maybe from a Github repo ?
>
> Regards,
>
> Charles
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


Re: Deleting a pod sometimes deletes the docker container and sometimes it doesn't

2016-09-19 Thread Andy Goldstein
When you delete a pod, its containers should **always** be deleted. If they
are not, this is a bug.

Could you please elaborate on the use case(s) you have for keeping the
containers around?

Thanks,
Andy

On Mon, Sep 19, 2016 at 5:09 AM, v  wrote:

> Hello,
>
> we have an issue with Docker/OpenShift.
> With OpenShift 1.1.4 and Docker 1.8.2 we can delete a pod via "oc delete
> po" and the docker container is only stopped but not deleted. That means
> the container is still visible via "docker ps -a",
> /var/lib/docker/containers/[hash] still persists, "docker logs" still
> works etc.
>
> With Openshift 1.1.6 and Docker 1.9.1 the docker container is sometimes
> DELETED when we delete a pod via "oc delete po" and sometimes it is just
> stopped like with Openshift 1.1.4/Docker 1.8.2.
>
> Is there any way we can influence this? Ideally we would like "oc delete
> po" to just stop the pod but not delete it.
>
> Regards
> v
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: Why can't we delete a project with oc delete project

2016-09-19 Thread Charles Moulliard
Thanks for the tip, Jordan. My problem is solved.

On Mon, Sep 19, 2016 at 2:36 PM, Jordan Liggitt  wrote:

> The project already exists (so you get the "already exists" error), and
> the dev user must not have permissions in it.
>
> You'll need to delete it with a more highly privileged user like
> system:admin
>
> On Sep 19, 2016, at 7:37 AM, Charles Moulliard 
> wrote:
>
> Hi,
>
> When I use these openshift oc client commands in a bash shell script, then
> I get this error
>
> ## Log on with the dev user, create the vertx project
> Error from server: project "vertx-demo" already exists
> ## Add policies
> Error from server: User "vertx-dev" cannot get policybindings in project
> "vertx-demo"
> Error from server: User "vertx-dev" cannot get policybindings in project
> "vertx-demo"
>
> Why can't we delete an existing project ? Is there a workaround ?
>
> #!/usr/bin/env bash
>> oc logout
>> oc login -u admin -p admin
>> oc delete project vertx-demo
>> oc login -u vertx-dev -p devel
>> echo "## Log on with the dev user, create the vertx project"
>> oc new-project vertx-demo
>> echo "## Add policies"
>> oc policy add-role-to-user view vertx-dev -n vertx-demo
>> oc policy add-role-to-group view system:serviceaccounts -n vertx-demo
>
>
> Regards,
>
> Charles
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
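Jordan's suggestion, as a shell sketch of the script's preamble (user and project names from the thread; assumes the cluster admin login is available where the script runs):

```shell
# sketch: do the delete as a privileged user, then switch back
oc login -u system:admin
oc delete project vertx-demo
oc login -u vertx-dev -p devel
oc new-project vertx-demo
```

Project deletion is asynchronous, so an immediate re-create can still race the namespace cleanup; a short wait or retry may be needed.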


Re: CI/CD Pipeline & DSL

2016-09-19 Thread Charles Moulliard
Hi,

I'm confused about this section of the doc:
https://github.com/openshift/jenkins-plugin#jenkins-pipeline-formerly-workflow-plugin

- Is Groovy DSL syntax still supported with Openshift Jenkins Pipeline ?

Additional questions :

- Is this the list of the Jenkins plugins packaged with the OpenShift
Docker image:
https://github.com/openshift/jenkins/blob/master/1/contrib/openshift/base-plugins.txt ?
- Can you confirm that the OpenShift Jenkins Docker image always packages
the latest Jenkins release (= LTS) ?
- Is it possible to trigger a Jenkins job (= Pipeline) when a new commit
has been pushed to a git project ?
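On the last question: yes, at the OpenShift level a BuildConfig (including a pipeline build) can be started by a webhook when a commit is pushed. A hedged sketch of the trigger section in v1 syntax (the secret value is a placeholder):

```yaml
# BuildConfig fragment (sketch); GitHub calls the webhook URL that the
# API server exposes for this build config
triggers:
- type: GitHub
  github:
    secret: "my-webhook-secret"
```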

Regards,

Charles

On Fri, Sep 16, 2016 at 7:52 PM, Ben Parees  wrote:

> In the meantime, you can certainly install the fabric8 plugin into our
> jenkins image and use those additional pipeline steps. The openshift
> pipeline feature works with any valid jenkinsfile.  We just happen to
> include the openshift specific plugin/dsl support in the image we ship.
>
> Ben Parees | OpenShift
>
> On Sep 16, 2016 19:08, "Clayton Coleman"  wrote:
>
>> It's been discussed.  The primary concern is that it would become a
>> versionable API once we ship it, so we have to have a release and
>> versioning process that maintains backwards compatibility for users for
>> the life of the product.  That part is what we haven't sorted out yet.
>>
>> On Fri, Sep 16, 2016 at 9:23 AM, Charles Moulliard 
>> wrote:
>>
>>> Hi,
>>>
>>> As far as I know Openshift and Fabric8 projects have developed DSL to
>>> define Jenkins Pipelines running top of OpenShift/Kubernetes
>>>
>>> - OpenShift : https://github.com/openshift/jenkins-plugin#jenkins-pipeline-formerly-workflow-plugin
>>> - Fabric8 : https://github.com/fabric8io/jenkins-pipeline-library
>>>
>>> Is it planned that (some) Fabric8 DSL tasks will be integrated in a
>>> future version of OpenShift ?
>>>
>>> Regards,
>>>
>>> Charles
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>


Deleting a pod sometimes deletes the docker container and sometimes it doesn't

2016-09-19 Thread v

Hello,

we have an issue with Docker/OpenShift.
With OpenShift 1.1.4 and Docker 1.8.2 we can delete a pod via "oc delete po" and the docker
container is only stopped but not deleted. That means the container is still visible via "docker ps
-a", /var/lib/docker/containers/[hash] still persists, "docker logs" still works etc.

With Openshift 1.1.6 and Docker 1.9.1 the docker container is sometimes DELETED when we 
delete a pod via "oc delete po" and sometimes it is just stopped like with 
Openshift 1.1.4/Docker 1.8.2.

Is there any way we can influence this? Ideally we would like "oc delete po" to 
just stop the pod but not delete it.

Regards
v




Re: s2i-base ancestry

2016-09-19 Thread Ben Parees
On Sun, Sep 18, 2016 at 5:12 AM, Dale Bewley  wrote:

>
>
> - On Aug 28, 2016, at 9:28 PM, Ben Parees  wrote:
>
> https://github.com/sclorg/rhscl-dockerfiles/tree/master/rhel7.s2i-base is
> the future, so you should go off that, which as you noted generates
> registry.access.redhat.com/rhscl/s2i-base-rhel7
>
> Some existing s2i rhel images are still based on
> https://github.com/openshift/s2i-base however.
>
>
> Thanks for the explanation.
>
> I see https://github.com/sclorg/rhscl-dockerfiles/blob/master/rhel7.s2i-base/Dockerfile
> has `FROM rhel:7.2-released`
>
> I do not see a `7.2-released` tag on the rhel image, but I assume it is
> the same image as `rhel:7.2`, perhaps on an internal registry?
>

Yes, it should be equivalent; you'll notice some of the other rhel7 images
use the rhel7.2 image repo directly as well:
https://github.com/sclorg/rhscl-dockerfiles/blob/master/centos7.mysql55/Dockerfile.rhel7

You should be OK using either rhel:7.2 or rhel7.2; they appear to be
identical:

registry.access.redhat.com/rhel7.2   latest   98a88a8b722a   12 days ago   201.4 MB
registry.access.redhat.com/rhel      7.2      98a88a8b722a   12 days ago   201.4 MB



> $ curl -s https://registry.access.redhat.com/v1/repositories/rhel/tags | jq .
> {
>   "6.7": "0701b067a2960e22357a373f8d53ac2ad12547e2c19d49750b9af4401167b70d",
>   "6.7-35": "fb7b495fd705c9e9bbbd17b893f9937957c6d6d94ea311d5494658dff86b4ac2",
>   "6.7-51": "0701b067a2960e22357a373f8d53ac2ad12547e2c19d49750b9af4401167b70d",
>   "6.8": "59d5a49b0f75c6f78a3673368cb76976ec89a5f020026f43fc99572d48ea936b",
>   "6.8-101": "59d5a49b0f75c6f78a3673368cb76976ec89a5f020026f43fc99572d48ea936b",
>   "7.0-21": "e1f5733f050b2488a17b7630cb038bfbea8b7bdfa9bdfb99e63a33117e28d02f",
>   "7.0-23": "bef54b8f8a2fdd221734f1da404d4c0a7d07ee9169b1443a338ab54236c8c91a",
>   "7.0-27": "8e6704f39a3d4a0c82ec7262ad683a9d1d9a281e3c1ebbb64c045b9af39b3940",
>   "7.1-11": "d0a516b529ab1adda28429cae5985cab9db93bfd8d301b3a94d22299af72914b",
>   "7.1-12": "275be1d3d0709a06ff1ae38d0d5402bc8f0eeac44812e5ec1df4a9e99214eb9a",
>   "7.1-16": "82ad5fa11820c2889c60f7f748d67aab04400700c581843db0d1e68735327443",
>   "7.1-24": "c4f590bbcbe329a77c00fea33a3a960063072041489012061ec3a134baba50d6",
>   "7.1-4": "10acc31def5d6f249b548e01e8ffbaccfd61af0240c17315a7ad393d022c5ca2",
>   "7.1-6": "65de4a13fc7cf28b4376e65efa31c5c3805e18da4eb01ad0c8b8801f4a10bc16",
>   "7.1-9": "e3c92c6cff3543d19d0c9a24c72cd3840f8ba3ee00357f997b786e8939efef2f",
>   "7.2": "aec95d6d92873ac6bc966d4fb6e97cd5a9e4da5e53e91478078557a7dc94eaaf",
>   "7.2-104": "aec95d6d92873ac6bc966d4fb6e97cd5a9e4da5e53e91478078557a7dc94eaaf",
>   "7.2-35": "6883d5422f4ec2810e1312c0e3e5a902142e2a8185cd3a1124b459a7c38dc55b",
>   "7.2-38": "6c3a84d798dc449313787502060b6d5b4694d7527d64a7c99ba199e3b2df834e",
>   "7.2-43": "07e361a1cd7cae70df4c585c2dbeceb25e069a01911c15a5d70d499eacd053ce",
>   "7.2-44": "18c92348de3686dfc369b5acd799b0538b54072279a130f56688510f6e6f9828",
>   "7.2-46": "bf63a676257aeb7a75a6bbbda138398bbaf223ab34fe9d169e458f9b399004ef",
>   "7.2-56": "95612a3264fcea256ed7c179d6e4a5dece55e217cff198bbaeb4a7e554f974ca",
>   "7.2-61": "c453594215e4370541ba0a2a238c9429026de1d1deedf5e5b7442778e428c60f",
>   "7.2-75": "885f7095eac8364d76d110f7a886974f12a4aeef7a03721bda49c8947f4992ce",
>   "7.2-84": "6f7a31562d1ec723b2b025c8cf040fd6c0e74cb14fd0abdbd1a9b0dee5dd19f6",
>   "latest": "aec95d6d92873ac6bc966d4fb6e97cd5a9e4da5e53e91478078557a7dc94eaaf"
> }
>
> Is it possible to see the Dockerfile for rhel:7.2-released?
>
>
>
> Note that there's no requirement your s2i image use that base image.  If
> you find it convenient, great, but its primary purpose is to serve as a
> common base for the existing suite of SCL-based s2i images, its use as a
> general base for s2i images is a secondary goal (meaning if it does not
> serve your purposes, requests for enhancement will be considered in the
> context of the primary goal).
>
> Understood. My goal is to customize as little as possible and make it as
> likely as possible that developers take advantage of your releases by way
> of imagechange triggers following my builder images which have imagechange
> triggers following your builder images. To accomplish this, I'm using
> buildconfigs inside OpenShift for my builder images instead of the s2i
> command line tool.
>

Sounds like the right approach.



>
>
> On Mon, Aug 29, 2016 at 12:11 AM, Dale Bewley  wrote:
>
>>
>> Where does the OpenShift Enterprise s2i-base image come from?
>>
>> A. https://github.com/openshift/s2i-base or
>> B. https://github.com/sclorg/rhscl-dockerfiles/tree/master/rhel7.s2i-base
>>
>> Repo A builds FROM rhel7.2 and seems to create
>> registry.access.redhat.com/openshift3/sti-base
>> Repo B builds FROM rhel:7.2-released and seems to create
>>