Re: Keycloak as oauth provider

2016-09-20 Thread Brenton Leanhardt
On Tue, Sep 20, 2016 at 2:35 PM, Charles Moulliard  wrote:
> As OpenID is supported by Keycloak, I'm fine with using
> (https://docs.openshift.org/latest/install_config/configuring_authentication.html#OpenID),
> but it would be great to have an OpenShift distro packaging Keycloak.

I might be misunderstanding what you mean by an OpenShift distro
packaging Keycloak, but I think this is what you want:

https://access.redhat.com/documentation/en/red-hat-xpaas/0/single/red-hat-xpaas-sso-image/

More documentation on Red Hat SSO (Keycloak-based) can be found here:
https://access.redhat.com/documentation/en/red-hat-single-sign-on/

>
> I have started to put something together about that:
> https://github.com/cmoulliard/tech-knowledge/blob/master/openshift-v3/cmd-generate-config.md#start-openshift-using-the-master-config-file-modified
>
> On Tue, Sep 20, 2016 at 7:27 PM, Dan Mace  wrote:
>>
>>
>>
>> On Tue, Sep 20, 2016 at 1:23 PM, Charles Moulliard 
>> wrote:
>>>
>>> Hi,
>>>
>>> What is the status of the integration of Keycloak as an OAuth provider
>>> with OpenShift Origin? Is it done or not done? Is it still planned?
>>>
>>> Regards,
>>>
>>> Charles
>>>
>>
>> Which authentication protocol are you interested in? You can integrate
>> OpenShift with Keycloak out of the box today using OpenID Connect. [1]
>>
>> [1]
>> https://docs.openshift.org/latest/install_config/configuring_authentication.html#OpenID
>>
>
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: docker version doubt

2016-09-20 Thread Hiberus
Ummm, weird.

I am still puzzled by the "unsupported" word, then.

I will deploy my apps on that cluster tomorrow. Hope it works without problems 
:-/


> On 20 Sep 2016, at 22:29, Alex Wauck wrote:
> 
> Oh, I didn't notice the "unsupported" part.  Mine says that, too, though.  
> Interestingly enough, I *don't* see it on my laptop or a Debian server here 
> at work.  On the Debian server, it's 1.12, and it comes straight from Docker 
> themselves.  On my laptop, it comes from the Arch Linux community package, 
> which compiles the docker binary instead of downloading a pre-built binary 
> from Docker.  So, I guess my initial theory that binaries that *don't* come 
> straight from Docker themselves have that "unsupported" bit is false.  I also 
> don't see it on my personal Debian server, where I installed Docker 1.6.2 
> from the Debian repository, so it's not phoning home and asking if it's still 
> supported.
> 
> So, no idea why it says that.  Sorry.
> 
>> On Tue, Sep 20, 2016 at 10:23 AM, Julio Saura  wrote:
>> Nice to hear.
>> 
>> It is a weird version name though xD
>> 
>> Thanks Alex.
>> 
>> Best regards
>> 
>>> On 20 Sep 2016, at 17:16, Alex Wauck wrote:
>>> 
>>> I've seen the same thing myself.  It seems to cause some bad interactions 
>>> with image stream tags (i.e. sha256-based references result in pull 
>>> failures), but on the plus side, you can use all those images on Docker Hub 
>>> that were pushed with 1.10 or later.  On balance, I'd say it solves more 
>>> problems than it creates.  We're running our production OpenShift cluster 
>>> with 1.10, and it's worked out pretty well for us.
>>> 
>>> On Tue, Sep 20, 2016 at 3:50 AM, Julio Saura  wrote:
 Hello
 
 I am installing a brand new OpenShift Origin cluster with CentOS 7.
 
 After installing the Docker engine (from the CentOS repo) I checked the version
 and I am concerned about the result:
 
 docker --version
 Docker version 1.10.3, build cb079f6-unsupported
 
 unsupported¿?
 
 Is this normal?
 
 thanks
 
 
 
 
 
>>> 
>>> 
>>> 
>>> -- 
>>> Alex Wauck // DevOps Engineer
>>> 
>>> E X O S I T E 
>>> www.exosite.com 
>>> 
>>> Making Machines More Human.
>> 
> 
> 
> 
> -- 
> Alex Wauck // DevOps Engineer
> 
> E X O S I T E 
> www.exosite.com 
> 
> Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Keycloak as oauth provider

2016-09-20 Thread Charles Moulliard
As OpenID is supported by Keycloak, I'm fine with using
(https://docs.openshift.org/latest/install_config/configuring_authentication.html#OpenID),
but it would be great to have an OpenShift distro packaging Keycloak.

I have started to put something together about that:
https://github.com/cmoulliard/tech-knowledge/blob/master/openshift-v3/cmd-generate-config.md#start-openshift-using-the-master-config-file-modified


On Tue, Sep 20, 2016 at 7:27 PM, Dan Mace  wrote:

>
>
> On Tue, Sep 20, 2016 at 1:23 PM, Charles Moulliard 
> wrote:
>
>> Hi,
>>
>> What is the status of the integration of Keycloak as an OAuth provider
>> with OpenShift Origin? Is it done or not done? Is it still planned?
>>
>> Regards,
>>
>> Charles
>>
>>
>>
> Which authentication protocol are you interested in? You can integrate
> OpenShift with Keycloak out of the box today using OpenID Connect. [1]
>
> [1] https://docs.openshift.org/latest/install_config/configuring_authentication.html#OpenID
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Keycloak as oauth provider

2016-09-20 Thread Dan Mace
On Tue, Sep 20, 2016 at 1:23 PM, Charles Moulliard 
wrote:

> Hi,
>
> What is the status of the integration of Keycloak as an OAuth provider
> with OpenShift Origin? Is it done or not done? Is it still planned?
>
> Regards,
>
> Charles
>
>
>
Which authentication protocol are you interested in? You can integrate
OpenShift with Keycloak out of the box today using OpenID Connect. [1]

[1]
https://docs.openshift.org/latest/install_config/configuring_authentication.html#OpenID
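
For reference, a minimal sketch of the corresponding OpenID Connect identity
provider stanza in master-config.yaml when pointed at Keycloak, following the
docs linked above (the realm name, client ID, secret, and hostname are
placeholders, not values from this thread):

oauthConfig:
  identityProviders:
  - name: keycloak
    challenge: false
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: OpenIDIdentityProvider
      clientID: openshift
      clientSecret: changeme
      claims:
        id:
        - sub
        preferredUsername:
        - preferred_username
      urls:
        authorize: https://keycloak.example.com/auth/realms/demo/protocol/openid-connect/auth
        token: https://keycloak.example.com/auth/realms/demo/protocol/openid-connect/token
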
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Keycloak as oauth provider

2016-09-20 Thread Charles Moulliard
Hi,

What is the status of the integration of Keycloak as an OAuth provider with
OpenShift Origin? Is it done or not done? Is it still planned?

Regards,

Charles
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Route on Service : RPC CALL

2016-09-20 Thread Clayton Coleman
Are your RPC calls based on HTTP, or a binary protocol?  The router only
supports traffic over HTTP, HTTPS, or TLS with Server-Name-Indication.

If you need to access a service from outside the cluster for non-HTTP
traffic, generally you would create a service of type NodePort (which gives
you a port on all hosts) and have your router machines act as your
gateways.  I.e. your service would be of type NodePort, you'd get port 31000,
and then you could access "mycompany.com:31000" (the wildcard you assigned
to the routers), which would connect to your pods on the backend port.
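
To illustrate, a minimal sketch of such a NodePort service in JSON (the name,
selector label, and port numbers are placeholders; omitting "nodePort" lets
the platform assign one from the configured range):

{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "rpc-service"
  },
  "spec": {
    "type": "NodePort",
    "selector": {
      "name": "rpc-app"
    },
    "ports": [
      {
        "port": 8080,
        "targetPort": 8080,
        "nodePort": 31000,
        "protocol": "TCP"
      }
    ]
  }
}

Traffic hitting any node (including the router hosts) on port 31000 would
then be forwarded to the pods matching the selector.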

On Tue, Sep 20, 2016 at 10:16 AM, Den Cowboy  wrote:

> Hi,
>
>
> We have a container which is exposing a port . We have to perform RPC
> calls on it.
> I can create a route on the port (https -> 8080) (map it on ) but this
> does not seem to work.
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Version of the artifacts supported / OpenShift Release

2016-09-20 Thread Clayton Coleman
Swagger 2.0 (OpenAPI) is a JSON schema - that will be released for 1.4, but it
is retroactively valid for older versions for most endpoints.

We have built tooling around the limitations in Swagger 1.2 to perform
validation and explanation - "oc create --validate=true ..." will check the
schema against your submitted object, and "oc explain pod" / "oc explain
pod.spec" will let you explore the various fields.

On Sep 20, 2016, at 1:34 AM, Charles Moulliard  wrote:

Using the Swagger API is certainly interesting, but not enough if, for example,
I would like to create a YAML or JSON file (BuildConfig, DeploymentConfig,
...).

A developer could use this doc (
https://docs.openshift.org/latest/rest_api/openshift_v1.html#v1-buildconfig,
https://docs.openshift.org/latest/rest_api/openshift_v1.html#v1-buildconfigspec,
...) to learn the spec for the BuildConfig process: the fields to be used,
which fields are mandatory/optional, their types, values, ...

But we should provide YAML/JSON reference files and JSON schemas (see -->
http://json-schema.org/) in order to help them validate that the config
file used is compliant with the spec, that they are using the correct values,
or simply to discover the supported features, for example:
https://docs.openshift.org/latest/rest_api/openshift_v1.html#v1-buildstrategy

Does that make sense?

On Mon, Sep 19, 2016 at 7:30 PM, Charles Moulliard 
wrote:

> The Swagger GUI is available at this address: https://openshift-server:8443/swaggerapi
>
> https://www.dropbox.com/s/e0jeow7zoj70oyy/Screenshot%202016-09-19%2019.30.14.png?dl=0
>
> On Mon, Sep 19, 2016 at 7:13 PM, Charles Moulliard 
> wrote:
>
>> Is Swagger packaged with OpenShift Origin to list the operations, ...?
>>
>> On Mon, Sep 19, 2016 at 6:45 PM, Clayton Coleman 
>> wrote:
>>
>>> That is generally the Swagger docs (1.2 currently) listed here:
>>> https://github.com/openshift/origin/tree/master/api/swagger-spec
>>>
>>> On Mon, Sep 19, 2016 at 8:37 AM, Charles Moulliard 
>>> wrote:
>>>
 Hi,

 Is it defined somewhere, for each OpenShift artifact (Template,
 DeploymentConfig, BuildConfig, ...), which version of the "syntax" is
 supported according to the OpenShift server (1.2, 1.3, ...) where it will be
 executed?
 Syntax = the JSON or YAML structure supported

 Example: The DeploymentConfig template is described here
 https://docs.openshift.com/enterprise/latest/dev_guide/deployments.html#dev-guide-deployments
 for API version 1.
 - Is API version 1 the API supported by OpenShift Origin 1.x?
 OpenShift Enterprise 3.x?
 - Is it possible to have a reference template for each artifact that we
 can use on top of OpenShift? Maybe from a GitHub repo?

 Regards,

 Charles



>>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: docker version doubt

2016-09-20 Thread Julio Saura
Nice to hear.

It is a weird version name though xD

Thanks Alex.

Best regards

> On 20 Sep 2016, at 17:16, Alex Wauck wrote:
> 
> I've seen the same thing myself.  It seems to cause some bad interactions 
> with image stream tags (i.e. sha256-based references result in pull 
> failures), but on the plus side, you can use all those images on Docker Hub 
> that were pushed with 1.10 or later.  On balance, I'd say it solves more 
> problems than it creates.  We're running our production OpenShift cluster 
> with 1.10, and it's worked out pretty well for us.
> 
On Tue, Sep 20, 2016 at 3:50 AM, Julio Saura wrote:
> Hello
> 
> I am installing a brand new OpenShift Origin cluster with CentOS 7.
> 
> After installing the Docker engine (from the CentOS repo) I checked the version
> and I am concerned about the result:
> 
> docker --version
> Docker version 1.10.3, build cb079f6-unsupported
> 
> unsupported¿?
> 
> Is this normal?
> 
> thanks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> Alex Wauck // DevOps Engineer
> 
> E X O S I T E 
> www.exosite.com  
> 
> Making Machines More Human.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: docker version doubt

2016-09-20 Thread Alex Wauck
I've seen the same thing myself.  It seems to cause some bad interactions
with image stream tags (i.e. sha256-based references result in pull
failures), but on the plus side, you can use all those images on Docker Hub
that were pushed with 1.10 or later.  On balance, I'd say it solves more
problems than it creates.  We're running our production OpenShift cluster
with 1.10, and it's worked out pretty well for us.

On Tue, Sep 20, 2016 at 3:50 AM, Julio Saura  wrote:

> Hello
>
> I am installing a brand new OpenShift Origin cluster with CentOS 7.
>
> After installing the Docker engine (from the CentOS repo) I checked the version
> and I am concerned about the result:
>
> docker --version
> Docker version 1.10.3, build cb079f6-unsupported
>
> unsupported¿?
>
> Is this normal?
>
> thanks
>
>
>
>
>
>



-- 

Alex Wauck // DevOps Engineer

E X O S I T E
www.exosite.com

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Route on Service : RPC CALL

2016-09-20 Thread Den Cowboy
Hi,


We have a container which is exposing a port . We have to perform RPC calls 
on it.
I can create a route on the port (https -> 8080) (map it on ) but this does 
not seem to work.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Deleting a pod sometimes deletes the docker container and sometimes it doesn't

2016-09-20 Thread v

All right, in that case aggregated logging it is.

Thank you very much for your support. :)


On 2016-09-20 at 15:46, Andy Goldstein wrote:

If the pod crashes immediately, they should be able to use 'oc debug' to try to 
determine what is happening. Otherwise, I would recommend using aggregated 
logging.

Andy

On Tue, Sep 20, 2016 at 3:07 AM, v wrote:

Hello,

our use case is that the developers sometimes want to access the logs of 
crashed pods in order to see why their app crashed. That's not possible via the 
OpenShift Console, therefore we back up the logs of exited containers and make
them accessible to the devs.

I think that this could be accomplished via aggregated logging too (could 
it?) but last time we tried that it turned out to be a very heavyweight 
solution that constantly required our attention and oversight.

Can you give us any recommendation?

Regards
v




On 2016-09-19 at 15:57, Andy Goldstein wrote:

When you delete a pod, its containers should **always** be deleted. If they 
are not, this is a bug.

Could you please elaborate what use case(s) you have for keeping the 
containers around?

Thanks,
Andy

On Mon, Sep 19, 2016 at 5:09 AM, v wrote:

Hello,

we have an issue with Docker/OpenShift.
With OpenShift 1.1.4 and Docker 1.8.2 we can delete a pod via "oc delete po" and the docker
container is only stopped but not deleted. That means the container is still visible via "docker ps
-a", /var/lib/docker/containers/[hash] still persists, "docker logs" still works etc.

With OpenShift 1.1.6 and Docker 1.9.1 the docker container is sometimes DELETED
when we delete a pod via "oc delete po" and sometimes it is just stopped like
with OpenShift 1.1.4/Docker 1.8.2.

Is there any way we can influence this? Ideally we would like "oc delete 
po" to just stop the pod but not delete it.

Regards
v




___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Deleting a pod sometimes deletes the docker container and sometimes it doesn't

2016-09-20 Thread Andy Goldstein
If the pod crashes immediately, they should be able to use 'oc debug' to
try to determine what is happening. Otherwise, I would recommend using
aggregated logging.
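
For example, a sketch of the 'oc debug' flow (the resource names are
hypothetical; 'oc logs --previous', not mentioned above, is a related way to
pull the previous container's logs):

$ oc debug dc/myapp          # start a shell in a fresh copy of the pod
$ oc logs mypod --previous   # logs from the previously crashed container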

Andy

On Tue, Sep 20, 2016 at 3:07 AM, v  wrote:

> Hello,
>
> our use case is that the developers sometimes want to access the logs of
> crashed pods in order to see why their app crashed. That's not possible via
> the OpenShift Console, therefore we back up the logs of exited containers
> and make them accessible to the devs.
>
> I think that this could be accomplished via aggregated logging too (could
> it?) but last time we tried that it turned out to be a very heavyweight
> solution that constantly required our attention and oversight.
>
> Can you give us any recommendation?
>
> Regards
> v
>
>
>
>
> Am 2016-09-19 um 15:57 schrieb Andy Goldstein:
>
> When you delete a pod, its containers should **always** be deleted. If
> they are not, this is a bug.
>
> Could you please elaborate what use case(s) you have for keeping the
> containers around?
>
> Thanks,
> Andy
>
> On Mon, Sep 19, 2016 at 5:09 AM, v  wrote:
>
>> Hello,
>>
>> we have an issue with Docker/OpenShift.
>> With OpenShift 1.1.4 and Docker 1.8.2 we can delete a pod via "oc delete
>> po" and the docker container is only stopped but not deleted. That means
>> the container is still visible via "docker ps -a",
>> /var/lib/docker/containers/[hash] still persists, "docker logs" still
>> works etc.
>>
>> With OpenShift 1.1.6 and Docker 1.9.1 the docker container is sometimes
>> DELETED when we delete a pod via "oc delete po" and sometimes it is just
>> stopped like with OpenShift 1.1.4/Docker 1.8.2.
>>
>> Is there any way we can influence this? Ideally we would like "oc delete
>> po" to just stop the pod but not delete it.
>>
>> Regards
>> v
>>
>>
>>
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: scenarios of entire app in a cluster unavailable

2016-09-20 Thread Brenton Leanhardt
On Mon, Sep 19, 2016 at 6:40 PM, Srinivas Naga Kotaru (skotaru)
 wrote:
> Trying to understand in which scenarios all the instances of an application
> running in a cluster become unavailable?
>
>
>
> OS upgrade failure??
>
> Openshift upgrade bugs/failures/downtime?

The best way to mitigate the risks from the first two is to upgrade
independent sets of Nodes in batches to prevent downtime in the event
of unforeseen problems.  Such problems should be rare if there is
sufficient monitoring in the environment.

In the Origin 1.4, OCP 3.4 timeframe it will be much easier to upgrade
batches of Nodes.  It's possible today but it takes a little more
involvement with the ansible inventory.  In large environments with
strict maintenance windows it's common to only update a set of Nodes
during each window.

>
> Router failures ??

This is likely the most common source of user-facing downtime.

>
> Keepalive containers failed??

Unless this event triggered a failover to a pod that was actually in
outage, I don't think the Keepalive pod failing would cause a
user-facing outage.  The platform would spawn another.

>
> Floating IP shared by keepalive container had issues??

If somehow the floating IP was in use by another interface on the
network I'm certain bad things would happen.

>
> VXLAN bug or upgrade caused entire cluster network failure?

Catastrophic network failures could indeed cause a major outage.

>
> Human config error (what are those???)

Always.  Best avoided by using a tool like Ansible and testing changes
in other environments before production.

>
>
>
> Is the above list accurate? Can we think of any other possible scenarios
> where a whole application will be down in the cluster due to platform issues?
>

I would mention downtime caused by load.  Anecdotally, this is
probably the second most common cause of downtime.  It often relates
to human error and a lack of monitoring.  The more densely the
platform operators wish to pack the environment, the more rigor is
needed for monitoring.

This could simply be an error on the pod owner's part as well.  E.g., the
JVM inside the pod might be online, however the application running in the
JVM might be throwing out-of-memory errors due to incorrect assignment
of limits.
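
For context, a minimal sketch of how such limits are assigned in a container
spec (the values are illustrative; a JVM's -Xmx must still fit within the
memory limit):

"resources": {
  "requests": {
    "cpu": "200m",
    "memory": "512Mi"
  },
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  }
}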

>
>
> --
>
> Srinivas Kotaru
>
>
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pod does not have Scale up/down buttons

2016-09-20 Thread Skarbek, John
The reason this is occurring is that you are utilizing a bare Pod definition.
The purpose of the pod is to spin up one pod and do nothing else.

Check out the documentation on creating a replication controller. Creating a
ReplicationController, instead of a Pod, will allow you to perform the scale
operation and maintain pods through a lifecycle.
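
For illustration, a minimal sketch of a ReplicationController wrapping the
node-test container from the definition below (the replica count and the
trimmed-down container spec are illustrative, not a drop-in replacement):

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "node-test"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "name": "node-test"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "node-test"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "node-test",
            "image": "node:4.4.7",
            "ports": [
              {
                "containerPort": 8080,
                "protocol": "TCP"
              }
            ]
          }
        ]
      }
    }
  }
}

Note that the persistentVolumeClaim in the original pod would also need
storage that supports being mounted by multiple pods before scaling above one
replica.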


--
John Skarbek


On September 19, 2016 at 15:22:49, Ravi Kapoor 
(ravikapoor...@gmail.com) wrote:

Once more, now with JSON

{
  "kind": "List",
  "apiVersion": "v1beta3",
  "metadata": {},
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "labels": {
          "name": "node-test"
        },
        "name": "node-test"
      },
      "spec": {
        "containers": [
          {
            "image": "node:4.4.7",
            "imagePullPolicy": "IfNotPresent",
            "name": "node-test",
            "command": [
              "node"
            ],
            "args": [
              "/usr/src/app/server.js"
            ],
            "ports": [
              {
                "containerPort": 8080,
                "protocol": "TCP"
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/usr/src/app",
                "name": "myclaim2"
              }
            ],
            "securityContext": {
              "capabilities": {},
              "privileged": false
            },
            "terminationMessagePath": "/dev/termination-log"
          }
        ],
        "volumes": [
          {
            "name": "myclaim2",
            "persistentVolumeClaim": {
              "claimName": "myclaim2"
            }
          }
        ],
        "dnsPolicy": "ClusterFirst",
        "restartPolicy": "Always",
        "serviceAccount": ""
      },
      "status": {}
    },
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "creationTimestamp": null,
        "name": "node-service"
      },
      "spec": {
        "portalIP": "",
        "ports": [
          {
            "name": "web",
            "port": 8080,
            "protocol": "TCP"
          }
        ],
        "selector": {
          "name": "node-test"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Route",
      "metadata": {
        "annotations": {},
        "name": "node-route"
      },
      "spec": {
        "to": {
          "name": "node-service"
        }
      }
    }
  ]
}

On Mon, Sep 19, 2016 at 2:19 PM, Ravi Kapoor wrote:

I created the following definition. It successfully creates a service, a pod,
and a route. I am able to access the website.

It shows 1 Pod running; however, there are no scale up/down buttons in the UI.
How can I scale this application up?


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


docker version doubt

2016-09-20 Thread Julio Saura
Hello

I am installing a brand new OpenShift Origin cluster with CentOS 7.

After installing the Docker engine (from the CentOS repo) I checked the version
and I am concerned about the result:

docker --version
Docker version 1.10.3, build cb079f6-unsupported

unsupported¿?

Is this normal?

thanks





___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Modifying existing advanced installation

2016-09-20 Thread Lionel Orellana
Hello

I want to configure LDAP authentication on my existing cluster.

Instead of manually modifying the master config file, can I add the new
settings to my Ansible inventory and rerun the config playbook? Does it
know to only apply the new configuration? Generally speaking, is this the
best way to make changes to an existing cluster?

Thanks

Lionel.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Deleting a pod sometimes deletes the docker container and sometimes it doesn't

2016-09-20 Thread v

Hello,

our use case is that the developers sometimes want to access the logs of 
crashed pods in order to see why their app crashed. That's not possible via the 
OpenShift Console, therefore we back up the logs of exited containers and make
them accessible to the devs.

I think that this could be accomplished via aggregated logging too (could it?) 
but last time we tried that it turned out to be a very heavyweight solution 
that constantly required our attention and oversight.

Can you give us any recommendation?

Regards
v



On 2016-09-19 at 15:57, Andy Goldstein wrote:

When you delete a pod, its containers should **always** be deleted. If they are 
not, this is a bug.

Could you please elaborate what use case(s) you have for keeping the containers 
around?

Thanks,
Andy

On Mon, Sep 19, 2016 at 5:09 AM, v wrote:

Hello,

we have an issue with Docker/OpenShift.
With OpenShift 1.1.4 and Docker 1.8.2 we can delete a pod via "oc delete po" and the docker
container is only stopped but not deleted. That means the container is still visible via "docker ps
-a", /var/lib/docker/containers/[hash] still persists, "docker logs" still works etc.

With OpenShift 1.1.6 and Docker 1.9.1 the docker container is sometimes DELETED when
we delete a pod via "oc delete po" and sometimes it is just stopped like with
OpenShift 1.1.4/Docker 1.8.2.

Is there any way we can influence this? Ideally we would like "oc delete 
po" to just stop the pod but not delete it.

Regards
v







___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users