How to create a new Grafana dashboard in OKD 3.11?
Hi,

On the OKD 3.11 docs site, I don't see any information about how to create a new dashboard. This was simple with 3.9 and 3.10. IIUC, OKD 3.11 ships with Grafana 5.2, and looking at the Grafana documentation [1], it seems that if the user has enough permissions, they should be able to create a new dashboard. To be sure, I granted the cluster-admin role to the user I'm working with, but I still don't see an option to create a new dashboard.

Regards,
Dharmit

[1] http://docs.grafana.org/guides/whats-new-in-v5-2/

--
Dharmit Shah
Red Hat Developer Tools (https://developers.redhat.com/)
irc, mattermost: dharmit
https://dharmitshah.com

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
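For readers following along: granting cluster-wide admin rights to a user is typically done with `oc adm policy`. A minimal sketch, assuming a user named "dharmit" (the user name is a placeholder):

```shell
# Grant the cluster-admin cluster role to a user
# (run as a user that already has cluster-admin, e.g. system:admin).
oc adm policy add-cluster-role-to-user cluster-admin dharmit

# List cluster role bindings to confirm the grant took effect.
oc get clusterrolebindings
```

Note that whether the Grafana instance shipped with OKD 3.11 allows dashboard edits can also depend on how Grafana itself is configured (e.g. a read-only deployment), independent of OpenShift RBAC.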
Re: OKD 3.9 to 3.10 upgrade failure on CentOS
Hi Dan,

In-line responses below.

On 30/11, Dan Pungă wrote:
> Hi Dharmit,
>
> What you're experiencing looks a lot like a problem I had with the upgrade.
> I ended up doing a fresh install.
>
> I've tried fiddling around with the ansible config, and as I was trying to
> get my head around what was happening I discovered an issue about node names,
> with this reply from Michael Gugino that shed some light on the matter:
> https://github.com/openshift/openshift-ansible/issues/9935#issuecomment-423268110
>
> Basically my problem was that the upgrade playbook of OKD 3.10 expected
> the node names from the previously installed version to be the short-name
> versions and not the FQDN.

My understanding is that with 3.10 you are required to have a proper DNS setup in the cluster. The inventory file needs to have the FQDNs of the systems in the cluster and not their IP addresses.

> I guess I was precisely in your position and I really didn't know what else
> to try except doing a fresh install. I have no idea if there is a way of
> changing node names of a running cluster. Maybe someone who knows more about
> the internals could be of help in this respect...

I'm not sure how to change the node names either. But I *think* it could be done by removing a node from the cluster and then adding it back through the scale-up playbook. There's documentation for doing this. It's easier said than done, but if you're careful, it's not entirely impossible.

> Since I see your installation is also a fresh one, maybe it would be worth
> uninstalling 3.9 and installing 3.10. Or maybe have a try at the newest
> 3.11.

This is my test environment, where I can play however I wish. Unfortunately, I can't do the same with production, where we are supposed to upgrade as well. :( I managed to fix the issue in my test environment and am going to upgrade the production cluster soon.
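The remove-and-re-add approach mentioned above might look roughly like this. This is only a sketch: the node name is a placeholder, and the exact scale-up playbook path can vary between openshift-ansible releases.

```shell
# Drain workloads off the node and remove it under its old name.
oc adm drain node1.example.com --ignore-daemonsets --delete-local-data
oc delete node node1.example.com

# Add the host (under its FQDN) to the [new_nodes] group in the
# inventory, then run the scale-up playbook so it rejoins the cluster.
ansible-playbook -i hosts \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml
```

Doing this one node at a time keeps the cluster serving traffic while names are migrated, but masters need extra care and this should be rehearsed on a test cluster first.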
Since the 3.9 to 3.10 upgrade wasn't working, we planned to uninstall OKD by executing the uninstall playbook (/usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml). We planned to re-use the Jenkins PV after a successful 3.11 deployment. While doing the 3.11 setup I faced this issue [1]. I decided to completely remove the configuration for "kubeletArguments" from the hosts file.

Configuring the kubelet arguments could be done by setting "openshift_node_kubelet_args" in 3.9. With 3.10, that variable is deprecated and the arguments have to be specified in "openshift_node_groups". I'm guessing I was doing something wrong there. Or maybe it's an issue with the OKD documentation mentioning arguments for container garbage collection [2] that are not available in the upstream kubelet documentation [3]. I have no clue! But after removing the kubeletArguments from "openshift_node_groups", the 3.9 to 3.10 upgrade using the playbook went just fine!

Hope that helps. :)

Regards,
Dharmit

[1] https://github.com/openshift/openshift-ansible/issues/10774
[2] https://docs.okd.io/3.10/admin_guide/garbage_collection.html#container-garbage-collection
[3] https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

> Hope it helps,
>
> Dan
>
> On 20.11.2018 04:38, Dharmit Shah wrote:
> > Hi,
> >
> > I'm trying to upgrade my OKD 3.9 cluster to 3.10 using
> > openshift-ansible. I have already described the problem in detail and
> > provided logs on the GitHub issue [1].
> >
> > I could really use some help on this issue!
> >
> > Regards,
> > Dharmit
> >
> > [1] https://github.com/openshift/openshift-ansible/issues/10690

--
Dharmit Shah
Red Hat Developer Tools (https://developers.redhat.com/)
irc, mattermost: dharmit
https://dharmitshah.com
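For reference, in 3.10 kubelet arguments are expressed as "edits" inside openshift_node_groups in the inventory, rather than via openshift_node_kubelet_args. A sketch of the 3.10-era syntax, with placeholder threshold values (this is the part that, per the above, had to be removed to get the upgrade through):

```ini
# Excerpt from the [OSEv3:vars] section of an Ansible inventory.
# Each entry in 'edits' patches a key in the generated node config;
# the threshold values here are placeholders for illustration.
openshift_node_groups=[{'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{'key': 'kubeletArguments.image-gc-high-threshold', 'value': ['80']}, {'key': 'kubeletArguments.image-gc-low-threshold', 'value': ['60']}]}]
```

If a key under kubeletArguments is not a flag the upstream kubelet actually accepts, the node service can fail to start, which matches the symptom described above.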
Re: problem with installation of OKD 3.11
On 27/11, Erekle Magradze wrote:
> Hi,
>
> Can you please drop the link to the GitHub issue here?

Sorry for missing that in my original response. Here's the link to the issue:
https://github.com/openshift/openshift-ansible/issues/10690

Regards,
Dharmit

> On 11/27/18 3:02 PM, Dharmit Shah wrote:
> > On 27/11, Erekle Magradze wrote:
> > > Nov 26 06:55:55 os-master origin-node: E1126 06:55:53.495294 5353 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
> > > Nov 26 06:55:58 os-master origin-node: W1126 06:55:58.496250 5353 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
> >
> > I faced a similar error while upgrading from OKD 3.9 to 3.10. While trying
> > to bring up the master node, openshift-ansible always failed, and
> > journalctl had similar logs. Looking around, I figured that this was due
> > to the missing `80-openshift-network.conf` file under `/etc/cni/net.d` on
> > the master node. The other nodes had this file.
> >
> > So I copy-pasted the file from a node to the master, and then `oc get nodes`
> > showed "Ready" instead of "NotReady" for the master. I then tried to
> > upgrade again from 3.9 to 3.10, but it would fail with the same error and
> > the same issue. I opened a GitHub issue [1] with all the details I could
> > find, but I haven't received any help yet.
> >
> > However, in my case the issue was with the 3.9 to 3.10 upgrade.
> >
> > Regards,
> > Dharmit

--
Dharmit Shah
Red Hat Developer Tools (https://developers.redhat.com/)
irc, mattermost: dharmit
https://dharmitshah.com
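The workaround described in the quoted message, copying the SDN CNI config from a healthy node to the master, might look like this. A sketch only: the host name is a placeholder, and on some installs the node service is named differently (e.g. atomic-openshift-node).

```shell
# Copy the openshift-sdn CNI config from a healthy node to the master
# (run on the master; "node1.example.com" is a placeholder).
scp node1.example.com:/etc/cni/net.d/80-openshift-network.conf /etc/cni/net.d/

# Restart the node service so the kubelet re-reads /etc/cni/net.d.
systemctl restart origin-node

# The master should now report Ready instead of NotReady.
oc get nodes
```

As the thread notes, this only unblocks the node: if the installer keeps removing or failing to regenerate the file, the underlying upgrade problem remains.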
OKD 3.9 to 3.10 upgrade failure on CentOS
Hi,

I'm trying to upgrade my OKD 3.9 cluster to 3.10 using openshift-ansible. I have already described the problem in detail and provided logs on the GitHub issue [1].

I could really use some help on this issue!

Regards,
Dharmit

[1] https://github.com/openshift/openshift-ansible/issues/10690

--
Dharmit Shah
Red Hat Developer Tools (https://developers.redhat.com/)
irc, mattermost: dharmit
https://dharmitshah.com
Re: How to upgrade OpenShift cluster using openshift-ansible on CentOS?
Hi,

I tried once again on a fresh setup. What I did was set up a 3.6 cluster and then try to upgrade it to 3.7, but the upgrade failed. Here's the hosts file [1] I used for setting up the 3.6 cluster. The command I used was:

$ ansible-playbook -i hosts openshift-ansible/playbooks/byo/config.yml

Now, when trying to upgrade to 3.7, I just changed the references from 3.6 to 3.7, but it failed. Here's the hosts file [2]. The command I used was:

$ ansible-playbook -i hosts openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml

What do I need to add/remove for the cluster upgrade to work?

Regards,

[1] http://pastebin.centos.org/557311/
[2] http://pastebin.centos.org/557321/

On 10/02, Dharmit Shah wrote:
> Hi Scott,
>
> On 09/02, Scott Dodson wrote:
> > Can you try openshift_release=v3.7? That variable is intended to indicate
> > just the major.minor you wish to use.
>
> It doesn't work with that. Here's the error:
> http://pastebin.centos.org/540661/
>
> Regards,
>
> > Hope this helps,
> > Scott
> >
> > On Fri, Feb 9, 2018 at 12:03 AM, Dharmit Shah <ds...@redhat.com> wrote:
> > > Hi,
> > >
> > > I'm trying to upgrade my OpenShift cluster from v3.6 to v3.7. But it
> > > fails with the error "openshift_release is 3.7.0 which is not a valid
> > > release for a 3.7 upgrade" [1].
> > >
> > > The OpenShift cluster I have was brought up off the "release-3.6" branch
> > > of the openshift-ansible repo [2]. The Ansible hosts file I am using is [3].
> > > For the v3.6 deployment, I had set `openshift_release=v3.6.0` in the same
> > > Ansible hosts file.
> > >
> > > The command I'm using to do the upgrade is:
> > >
> > > $ ansible-playbook openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml -vvv
> > >
> > > I executed the above command after switching to the "release-3.7" branch
> > > in the openshift-ansible directory.
> > >
> > > Not sure what I'm missing here. Do I need to use a different value for
> > > the release?
> > >
> > > Regards,
> > >
> > > [1] http://pastebin.centos.org/538756/
> > > [2] https://github.com/openshift/openshift-ansible
> > > [3] http://pastebin.centos.org/538746/
> > > --
> > > Dharmit Shah
> > > Red Hat Developer Tools (https://developers.redhat.com/)
> > > irc: dharmit (#devtools, #centos, #pune)

--
Dharmit Shah
Red Hat Developer Tools (https://developers.redhat.com/)
irc, mattermost: dharmit
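For readers hitting the same "not a valid release" error: the upgrade playbook validates the openshift_release value in the inventory. Scott's suggestion in the thread was to use the major.minor form; a sketch of the relevant inventory line (note the thread reports the poster still saw an error with this, so treat it as a starting point rather than a confirmed fix):

```ini
# [OSEv3:vars] excerpt -- the v3_7 upgrade playbook checks this value.
# Major.minor form, as suggested in the thread, rather than v3.7.0:
openshift_release=v3.7
```

It also matters that the openshift-ansible checkout is on the release branch matching the target version (here, "release-3.7") when the upgrade playbook runs, which the thread confirms was done.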
Re: How to upgrade OpenShift cluster using openshift-ansible on CentOS?
Hi Scott,

On 09/02, Scott Dodson wrote:
> Can you try openshift_release=v3.7? That variable is intended to indicate
> just the major.minor you wish to use.

It doesn't work with that. Here's the error:
http://pastebin.centos.org/540661/

Regards,

> Hope this helps,
> Scott
>
> On Fri, Feb 9, 2018 at 12:03 AM, Dharmit Shah <ds...@redhat.com> wrote:
> > Hi,
> >
> > I'm trying to upgrade my OpenShift cluster from v3.6 to v3.7. But it
> > fails with the error "openshift_release is 3.7.0 which is not a valid
> > release for a 3.7 upgrade" [1].
> >
> > The OpenShift cluster I have was brought up off the "release-3.6" branch
> > of the openshift-ansible repo [2]. The Ansible hosts file I am using is [3].
> > For the v3.6 deployment, I had set `openshift_release=v3.6.0` in the same
> > Ansible hosts file.
> >
> > The command I'm using to do the upgrade is:
> >
> > $ ansible-playbook openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml -vvv
> >
> > I executed the above command after switching to the "release-3.7" branch
> > in the openshift-ansible directory.
> >
> > Not sure what I'm missing here. Do I need to use a different value for
> > the release?
> >
> > Regards,
> >
> > [1] http://pastebin.centos.org/538756/
> > [2] https://github.com/openshift/openshift-ansible
> > [3] http://pastebin.centos.org/538746/
> > --
> > Dharmit Shah
> > Red Hat Developer Tools (https://developers.redhat.com/)
> > irc: dharmit (#devtools, #centos, #pune)

--
Dharmit Shah
Red Hat Developer Tools (https://developers.redhat.com/)
irc, mattermost: dharmit
How to upgrade OpenShift cluster using openshift-ansible on CentOS?
Hi,

I'm trying to upgrade my OpenShift cluster from v3.6 to v3.7. But it fails with the error "openshift_release is 3.7.0 which is not a valid release for a 3.7 upgrade" [1].

The OpenShift cluster I have was brought up off the "release-3.6" branch of the openshift-ansible repo [2]. The Ansible hosts file I am using is [3]. For the v3.6 deployment, I had set `openshift_release=v3.6.0` in the same Ansible hosts file.

The command I'm using to do the upgrade is:

$ ansible-playbook openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml -vvv

I executed the above command after switching to the "release-3.7" branch in the openshift-ansible directory.

Not sure what I'm missing here. Do I need to use a different value for the release?

Regards,

[1] http://pastebin.centos.org/538756/
[2] https://github.com/openshift/openshift-ansible
[3] http://pastebin.centos.org/538746/

--
Dharmit Shah
Red Hat Developer Tools (https://developers.redhat.com/)
irc: dharmit (#devtools, #centos, #pune)