Re: How to grant system:admin rights to admin?
Hi Henryk,

Not sure if this is applicable to your setup, but an alternative is to point oc at admin.kubeconfig, e.g.:

oc --config /var/lib/origin/openshift.local.config/master/admin.kubeconfig adm policy add-cluster-role-to-user cluster-admin developer

I've been using this approach because 'oc login -u system:admin' didn't work with my dev setup (created using 'oc cluster up') for some reason. It does seem to work when using minishift, so I'd love to know what's causing it as well.

Hth,
Ulf

On 06. juni 2017 16:16, Henryk Konsek wrote:
> Hi Graham,
>
> That would probably be fine. I assume that I should log in as system:admin in order to execute those commands, right? The problem is that I cannot switch to system:admin:
>
> oc login -u system:admin
> Authentication required for https://localhost:8443 (openshift)
> Username: system:admin
> Password:
> error: username system:admin is invalid for basic auth
>
> Any idea what I'm doing wrong? Cheers!
>
> On Mon, 5 Jun 2017 at 12:28, Graham Dumpleton <gdump...@redhat.com> wrote:
>>> On 5 Jun 2017, at 8:13 PM, Henryk Konsek <hekon...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> Quick question. Is there an easy way to grant "system:admin" privileges to the "admin" user? I'd like to make it possible for the 'admin' user to list projects and namespaces, for example. I'm aware that this is not recommended for a production environment, but it is something we need to automate our integration test suite.
>>
>> Not sure if this suits your requirements, but presuming 'username' exists, as a user who already has admin rights, try:
>>
>> oc adm policy add-cluster-role-to-user cluster-reader username
>>
>> if you only want them to be able to view things but not modify them, or:
>>
>> oc adm policy add-cluster-role-to-user cluster-admin username
>>
>> if you want to allow them full edit access to the cluster. Replace 'username' with the actual name of the user.
>>
>> Graham
>
> --
> Henryk Konsek
> https://linkedin.com/in/hekonsek

--
Ulf

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
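The two role grants discussed in this thread can be summarized in a minimal sketch. `grant_cmd` is a hypothetical helper that only builds the `oc` invocations as strings (actually running them requires a live cluster); the kubeconfig path is the 'oc cluster up' default mentioned above, and 'developer' is an example username.

```python
# Sketch: build the 'oc adm policy' commands from this thread.
# Assumptions: the admin.kubeconfig path below is the 'oc cluster up'
# default, and 'developer' stands in for the real username.
import shlex

kubeconfig = "/var/lib/origin/openshift.local.config/master/admin.kubeconfig"

def grant_cmd(role, user, config=None):
    """Build an 'oc adm policy add-cluster-role-to-user' command string."""
    cmd = ["oc"]
    if config:
        # Point oc at the generated admin kubeconfig instead of
        # logging in as system:admin.
        cmd += ["--config", config]
    cmd += ["adm", "policy", "add-cluster-role-to-user", role, user]
    return " ".join(shlex.quote(part) for part in cmd)

# Read-only cluster access vs. full cluster-admin:
reader = grant_cmd("cluster-reader", "developer")
admin = grant_cmd("cluster-admin", "developer", config=kubeconfig)
print(reader)
print(admin)
```

The commands are printed rather than executed, so the sketch is safe to run anywhere; paste the output into a shell on the cluster host to apply it.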
Re: Copying annotations from DeploymentConfig to ReplicationController
https://github.com/openshift/origin/issues/12539

Thanks!

On 18. jan. 2017 13:39, Michail Kargakis wrote:
> Yes, it is expected behavior. Propagating annotations to the replication controller does not sound unreasonable; can you open an issue about it?
>
> On Wed, Jan 18, 2017 at 1:16 PM, Ulf Lilleengen <l...@redhat.com> wrote:
>> Hi,
>>
>> I have some custom annotations added to a DeploymentConfig and would like them to be propagated/copied to the ReplicationController that is created. Currently, only the labels of the DeploymentConfig are propagated. Is this expected behavior? Attached is an example where I want the annotation 'example: annotation1' to be propagated to the rc 'test-1'. Using origin 1.3.2.
>>
>> --
>> Ulf

--
Ulf
Copying annotations from DeploymentConfig to ReplicationController
Hi,

I have some custom annotations added to a DeploymentConfig and would like them to be propagated/copied to the ReplicationController that is created. Currently, only the labels of the DeploymentConfig are propagated. Is this expected behavior? Attached is an example where I want the annotation 'example: annotation1' to be propagated to the rc 'test-1'. Using origin 1.3.2.

--
Ulf

[Attachment: annotation-propagate.yaml (application/yaml)]
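The behavior under discussion (labels copied to the RC, annotations dropped, as of origin 1.3) can be mimicked in a short sketch. `make_rc` and the manifest dicts below are hypothetical illustrations of the propagation rule, not origin's actual deployment code:

```python
# Sketch of the propagation rule from this thread: when a DeploymentConfig is
# rolled out, its labels are carried over to the ReplicationController it
# creates, but its annotations are not. These manifests are minimal,
# hypothetical examples.

dc = {
    "kind": "DeploymentConfig",
    "metadata": {
        "name": "test",
        "labels": {"app": "test"},
        "annotations": {"example": "annotation1"},
    },
}

def make_rc(dc, version=1, propagate_annotations=False):
    """Mimic RC creation: labels are always copied, annotations optionally."""
    meta = {
        # RCs for a DC are named <dc-name>-<deployment-version>, e.g. 'test-1'.
        "name": f"{dc['metadata']['name']}-{version}",
        "labels": dict(dc["metadata"].get("labels", {})),
    }
    if propagate_annotations:
        meta["annotations"] = dict(dc["metadata"].get("annotations", {}))
    return {"kind": "ReplicationController", "metadata": meta}

rc = make_rc(dc)                                      # origin 1.3 behavior
rc_patched = make_rc(dc, propagate_annotations=True)  # behavior requested in the issue
```

Running `make_rc(dc)` yields an RC named 'test-1' with the 'app: test' label but no 'example' annotation, matching what the thread observes; the `propagate_annotations=True` variant shows what the linked issue asks for.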
Re: Resolving localhost (IPv6 issue?)
sh-4.3$ dig +search localhost

; <<>> DiG 9.10.3-P4-RedHat-9.10.3-13.P4.fc23 <<>> +search localhost
;; global options: +cmd
;; connection timed out; no servers could be reached

On 08/11/2016 08:12 PM, Clayton Coleman wrote:
> Can you run dig in a container on localhost (dig +search localhost, I think) and contrast that? Is it only one tool that fails to read /etc/hosts, or all tools?
>
> On Thu, Aug 11, 2016 at 2:00 PM, Ulf Lilleengen <l...@redhat.com> wrote:
>> My bad. This was inside the same container, but it was one I ran manually with --net=host. Here is from the one running in openshift:
>>
>> sh-4.3$ cat /etc/resolv.conf
>> search enmasse.svc.cluster.local svc.cluster.local cluster.local
>> nameserver 192.168.1.17
>> options ndots:5
>>
>> [...]
Re: Resolving localhost (IPv6 issue?)
My bad. This was inside the same container, but it was one I ran manually with --net=host. Here is from the one running in openshift:

sh-4.3$ cat /etc/resolv.conf
search enmasse.svc.cluster.local svc.cluster.local cluster.local
nameserver 192.168.1.17
options ndots:5

On 08/11/2016 07:44 PM, Clayton Coleman wrote:
> Sorry, I meant resolv.conf inside of one of your containers
>
> On Thu, Aug 11, 2016 at 1:35 PM, Ulf Lilleengen <l...@redhat.com> wrote:
>> [root@strappi /]# cat /etc/resolv.conf
>> # Generated by NetworkManager
>> search redhat.com
>> nameserver 10.38.5.26
>> nameserver 10.35.255.14
>> nameserver 192.168.1.1
>> # NOTE: the libc resolver may not support more than 3 nameservers.
>> # The nameservers listed below may not be recognized.
>> nameserver 2001:4662:afe3:0:8a1f:a1ff:fe2d:34e6
>>
>> Note that I set disable_ipv6=1, but I didn't restart openshift/docker after doing it.
>>
>> [...]
Re: Resolving localhost (IPv6 issue?)
[root@strappi /]# cat /etc/resolv.conf
# Generated by NetworkManager
search redhat.com
nameserver 10.38.5.26
nameserver 10.35.255.14
nameserver 192.168.1.1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 2001:4662:afe3:0:8a1f:a1ff:fe2d:34e6

Note that I set disable_ipv6=1, but I didn't restart openshift/docker after doing it.

On 08/11/2016 05:48 PM, Clayton Coleman wrote:
> Tried this with Fedora 24 and very similar config (but centos7 image) and I'm able to ping localhost.
>
> $ cat /etc/hosts
> # Kubernetes-managed hosts file.
> 127.0.0.1   localhost
> ::1         localhost ip6-localhost ip6-loopback
> fe00::0     ip6-localnet
> fe00::0     ip6-mcastprefix
> fe00::1     ip6-allnodes
> fe00::2     ip6-allrouters
> 172.17.0.2  centos centos7-debug
>
> I have net.ipv6.conf.all.disable_ipv6 = 1 set - should be possible. Can you provide your /etc/resolv.conf from inside that image?
>
> [...]

--
Ulf
Re: Resolving localhost (IPv6 issue?)
Host:
  OS: Fedora 24
  Docker: 1.10.3
  glibc: 2.23.1-8

Docker image:
  Name: gordons/qdrouterd:v10 (based on Fedora 23)
  glibc: 2.22-11

Nothing special in the images other than that. The issue appeared without any significant change other than running the latest openshift/origin image.

On 08/11/2016 04:58 PM, Clayton Coleman wrote:
> That is very strange. Anything special about the container (what OS, libraries, glibc version, musl)? What version of Docker was running?
>
> On Wed, Aug 10, 2016 at 3:45 AM, Ulf Lilleengen <l...@redhat.com> wrote:
>> Hi,
>>
>> We were debugging an issue yesterday where 'localhost' could not be resolved inside a container in openshift origin v1.3.0-alpha.3. [...]

--
Ulf
Resolving localhost (IPv6 issue?)
Hi,

We were debugging an issue yesterday where 'localhost' could not be resolved inside a container in openshift origin v1.3.0-alpha.3. I'm not sure if this is openshift- or kubernetes-related, but I thought I'd ask here first.

We have two containers running in a pod, and one container connects to the other using 'localhost'. This has worked fine for several months, but stopped working yesterday. We resolved the issue by using 127.0.0.1 instead; we were also able to use the pod hostname. I'm thinking this might be related to IPv6, given that /etc/hosts seems to contain IPv6 records for localhost, and the other container may be listening on IPv4 only. I tried disabling IPv6 with sysctl net.ipv6.conf.all.disable_ipv6=1 to verify, but I still saw the same issue.

sh-4.3$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
fe00::0     ip6-mcastprefix
fe00::1     ip6-allnodes
fe00::2     ip6-allrouters
172.17.0.6  controller-queue1-tnvav

--
Ulf Lilleengen
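To narrow down whether the resolver is returning only the IPv6 ::1 entry for 'localhost' while the peer container listens on IPv4 only, a quick check can be run from inside the affected container. This is a hypothetical diagnostic sketch (not from the thread); `resolve` is an illustrative helper around the standard `socket.getaddrinfo`:

```python
# Sketch: compare how 'localhost' and a literal loopback address resolve
# inside the container. If 'localhost' yields only '::1' (or an error) while
# '127.0.0.1' works, that points at the IPv6 /etc/hosts entries discussed above.
import socket

def resolve(name):
    """Return the distinct addresses a name resolves to, or the error."""
    try:
        infos = socket.getaddrinfo(name, None)
    except socket.gaierror as exc:
        return {"error": str(exc)}
    # Each getaddrinfo entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the address string.
    return {"addrs": sorted({info[4][0] for info in infos})}

loopback = resolve("127.0.0.1")
local = resolve("localhost")
print("127.0.0.1 ->", loopback)
print("localhost ->", local)
```

Running this in both the broken container and a working one (e.g. one started with --net=host) would show whether the difference is in name resolution itself or in which address family the peer listens on.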