Re: Error updating deployment [deploy] status to Pending

2016-05-19 Thread Skarbek, John
Philippe,

Is the node in a Ready state?

The log output you posted makes it seem like something isn’t working properly 
if it keeps reading a config file over and over.

Are you able to start pods that do not utilize a PV?
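
For reference, a minimal way to check both things (assuming cluster-admin
access; the pod name and image below are just examples):

oc get nodes                        # the node should report STATUS Ready
oc describe node openshift-node-1   # check Conditions and recent Events

# throwaway pod that mounts no volumes at all
cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pv-free-test
spec:
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
EOF
oc get pod pv-free-test -o wide     # does it leave Pending and get scheduled?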


--
John Skarbek


On May 19, 2016 at 16:43:16, Philippe Lafoucrière
(philippe.lafoucri...@tech-angels.com) wrote:

If I make this node unschedulable, I get an event: "node 'openshift-node-1' is 
not in cache"
Does this ring a bell? (and the pod is still pending)
For the record, there's a PV used in this pod, and all pods have the same 
behavior now on this cluster. Only a restart of origin-node can unlock them.
Thanks


Re: containers with host volumes from controllers

2016-05-19 Thread Clayton Coleman
Users don't have a "preferred namespace"; you'll have to provide that
yourself.  oc project sets it in the config, and you can use the -n flag to
set it per command.
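
A quick illustration of the two options (the project name and file name are
placeholders):

# record the namespace in the client config; later commands default to it
oc project myproject

# or pass the namespace explicitly on each call
oc get pods -n myproject
oc create -f rc.yaml -n myproject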

On May 19, 2016, at 11:36 AM, Alan Jones  wrote:

Of all the commands I've tried, I think the following from another thread
did the magic:
oadm policy add-scc-to-user privileged -z default
In addition, I had to provide kubelet with --allow-privileged=true, which
wasn't required in the stock K8 1.2 config.
Perhaps OpenShift is adding something to the pod spec that kubelet is
validating.
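
For the record, the two changes amount to something like this (the
node-config.yaml stanza is my guess at where the kubelet flag ends up on an
OpenShift node; adjust to your layout):

# let the "default" service account of the target project use the
# privileged SCC (run as cluster-admin; add -n <project> if needed)
oadm policy add-scc-to-user privileged -z default

# and on each node, enable privileged containers for the kubelet, e.g. via
# kubeletArguments in node-config.yaml, then restart the node service:
#   kubeletArguments:
#     allow-privileged:
#     - "true"
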
What I'd really like to do now is wipe the OpenShift config to rerun
'atomic-openshift-installer install' and confirm the particular steps that
make it work.
If you have any insight into the best way to wipe my OpenShift config,
please share.

On getting my user and project: the replication set is submitted by one of
our system daemons that is run out of systemd with our own user and the
node certificates I described earlier.
Looking at the CLI code, it seems the command 'oc project' gets it from the
context before any REST API call is made.
However, 'oc whoami' seems to call GET on the 'users' resource with the
name '~'.
Can my daemon make that call and get the project name or namespace from
the user details?
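
If it helps, the call 'oc whoami' makes looks roughly like this (the server
URL, cert/key paths, and kubeconfig path are placeholders for whatever the
daemon already uses):

# GET the virtual user "~" from the OpenShift API with the client certs
curl -sk \
  --cert /path/to/client.crt --key /path/to/client.key \
  https://master.example.com:8443/oapi/v1/users/~

# roughly the same thing via the CLI
oc get user '~' -o yaml --config=/path/to/node.kubeconfig

Note the returned User object carries identities and groups but no
namespace, so the project would still have to come from the daemon's client
config or an explicit -n on each request.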

Thanks for helping me get this right!
Alan

On Wed, May 18, 2016 at 7:43 PM, Clayton Coleman wrote:

> The node is running as a user, but every pod / rc has to be created in
> a namespace (or project, which is the same thing but with some
> additional controls).  When you create an RC from your credentials,
> you are either creating it in the "default" namespace (in which case
> you need to grant system:serviceaccount:default:default access to
> hostmount-anyuid) or in whatever namespace was the default.  If you
> run "oc project", which project does it say you are in?
>
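
(For reference, the grant described above would look something like this,
assuming the RC really does land in the "default" project:)

oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:default

# equivalent shorthand: -z names a service account in the project given by -n
oadm policy add-scc-to-user hostmount-anyuid -z default -n default
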
> On Wed, May 18, 2016 at 8:16 PM, Alan Jones  wrote:
> > I now reproduced the issue with OpenShift 3.2 on RHEL 7, as opposed to my
> > few-week-old origin on CentOS.
> > Unfortunately, my magic command isn't working.
> > Here is my procedure:
> > 1) Create node certs with `oadm create-node-config`
> > 2) Use these certs from said node to create a replication set for a
> > container that requires a host mount.
> > 3) See event with 'hostPath volumes are not allowed to be used'
> > Note, this process works with standard Kubernetes, so navigating the
> > OpenShift authentication & permissions is what I'm trying to accomplish.
> > Also note that there is no *project* specified in this procedure; the node
> > being certified belongs to system:node. Should I use that?
> > I feel like I'm flying blind because there is no feedback:
> > 1) The command to add privileges doesn't verify that the project or user
> > exists.
> > 2) The failure doesn't tell me which project/user was attempting to do the
> > unpermitted task.
> > Alan
> > [root@alan-lnx ~]# cat /etc/redhat-release
> > Red Hat Enterprise Linux Server release 7.2 (Maipo)
> > [root@alan-lnx ~]# openshift version
> > openshift v3.2.0.20
> > kubernetes v1.2.0-36-g4a3f9c5
> > etcd 2.2.5
> >
> >
> > On Wed, May 18, 2016 at 3:08 PM, Alan Jones  wrote:
> >>
> >> I think I'm making progress:
> >> oadm policy add-scc-to-user hostmount-anyuid
> >> system:serviceaccount:openshift-infra:default
> >> Now when I submit the replica set I get a different mount error that I
> >> think I understand.
> >> Note, the context I'm submitting the request in is using the node host
> >> certs under /openshift.local/config/ to the API server.
> >> There is no specified project.
> >> Thank you!
> >> Alan
> >>
> >> On Wed, May 18, 2016 at 2:48 PM, Clayton Coleman wrote:
> >>>
> >>>
> >>>
> >>> On May 18, 2016, at 5:26 PM, Alan Jones  wrote:
> >>>
> >>> > oadm policy ... -z default
> >>> In the version of OpenShift Origin I'm using, the oadm command doesn't
> >>> take '-z'.
> >>> Can you fill in the dot, dot, dot for me?
> >>> I'm trying to grant permission for host volume access for a pod created
> >>> by the replication controller which was submitted with node credentials
> >>> to the API server.
> >>> Here is my latest failed attempt to try to follow your advice:
> >>> oadm policy add-scc-to-group hostmount-anyuid
> >>> system:serviceaccount:default
> >>> Again, this would be much easier if I could get logs for what group and
> >>> user it is evaluating when it fails.
> >>> Alan
> >>>
> >>>
> >>> system:serviceaccount:NAMESPACE:default
> >>>
> >>> Since policy is global, you have to identify which namespace/project
> >>> contains the "default" service account (service accounts are scoped to a
> >>> project).
> >>>
> >>>
> >>> On Tue, May 17, 2016 at 5:46 PM, Clayton Coleman wrote:
> 
>  You need to grant the permission to a service account for the pod (which
>  is "default" if you don't fill in 

Re: Error updating deployment [deploy] status to Pending

2016-05-19 Thread Philippe Lafoucrière
If I make this node unschedulable, I get an event: "node 'openshift-node-1'
is not in cache"
Does this ring a bell? (and the pod is still pending)
For the record, there's a PV used in this pod, and all pods have the same
behavior now on this cluster. Only a restart of origin-node can unlock them.
Thanks


Re: Problem with start container from image registry

2016-05-19 Thread holo holo
One more log entry connected with the same thing:


E0519 11:00:09.099712    2098 pod_workers.go:138] Error syncing pod
5d3c48a1-1dd2-11e6-a164-525400c36a07, skipping: failed to "StartContainer"
for "testapp4" with ErrImagePull: "API error (500): Get
http://172.30.236.174:5000/v2/: dial tcp 172.30.236.174:5000: getsockopt:
no route to host\n"
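
"no route to host" against the service IP suggests the new node can't reach
the cluster service network at all. A couple of checks worth running from
that node (the IP is the one from your log; the registry normally lives in
the "default" project):

curl -v http://172.30.236.174:5000/v2/        # the same request the kubelet makes
iptables-save | grep 172.30.236.174           # are the kube-proxy NAT rules present?
oc get endpoints docker-registry -n default   # which pod IP:port backs the service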


//robert

On Thu, May 19, 2016 at 4:55 PM, holo holo  wrote:

> Hello all.
>
> I configured openshift and everything is working properly on the host where
> the docker-registry is started. When I added a new node and tried to deploy
> containers on it, I got this error in the logs:
>
> E0519 10:51:38.574152    2135 pod_workers.go:138] Error syncing pod
> 083b958e-1dc0-11e6-8ca2-525400c36a07, skipping: failed to "StartContainer"
> for "testapp4" with ImagePullBackOff: "Back-off pulling image \"
> 172.30.236.174:5000/test/testapp4@sha256:64c3dc4cb983986a1dd5a7979f03f449b089f4baaf979b67363a92aac43e49cd\""
>
> I'm guessing the problem is that the new node does not "see" the
> docker-registry address 172.30.236.174, which is deployed on another node.
> Should I do something more with the new node (I just started openshift with
> the node config)?
>
> Best regards
> Robert
>


Problem with start container from image registry

2016-05-19 Thread holo holo
Hello all.

I configured openshift and everything is working properly on the host where
the docker-registry is started. When I added a new node and tried to deploy
containers on it, I got this error in the logs:

E0519 10:51:38.574152    2135 pod_workers.go:138] Error syncing pod
083b958e-1dc0-11e6-8ca2-525400c36a07, skipping: failed to "StartContainer"
for "testapp4" with ImagePullBackOff: "Back-off pulling image \"
172.30.236.174:5000/test/testapp4@sha256:64c3dc4cb983986a1dd5a7979f03f449b089f4baaf979b67363a92aac43e49cd\""

I'm guessing the problem is that the new node does not "see" the
docker-registry address 172.30.236.174, which is deployed on another node.
Should I do something more with the new node (I just started openshift with
the node config)?
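
A couple of cluster-side checks that might confirm this (assuming the
cluster uses the OpenShift SDN; the resource and plugin names below are the
usual defaults, adjust if your setup differs):

oc get hostsubnets          # did the new node get a subnet on the cluster network?
oc get nodes                # is the new node registered and Ready?

# node-config.yaml on the new node should use the same networkPluginName as
# the existing node (e.g. redhat/openshift-ovs-subnet), and the node service
# must be running there so the service/NAT rules get programmed.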

Best regards
Robert