Yes, I didn't know about that! When creating a new cluster on GCP, Legacy
Authorization is disabled by default.
So when I enabled it, I was able to apply my old configuration with no
issues.
Thanks for your link, I will look into that and try to test it.
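For anyone hitting the same problem, Legacy Authorization (ABAC) can also be enabled at cluster-creation time from the CLI; a minimal sketch, where the cluster name and zone are placeholders:

```shell
# Create a GKE cluster with Legacy Authorization (ABAC) enabled.
# --enable-legacy-authorization relaxes RBAC so older, pre-RBAC
# manifests keep working; proper RBAC rules are the better long-term fix.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --enable-legacy-authorization
```

The same setting can be toggled on an existing cluster with `gcloud container clusters update my-cluster --enable-legacy-authorization`.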
On Friday, May 18, 2018
I recommend just following the tutorial at
https://cloud.google.com/community/tutorials/nginx-ingress-gke to install
nginx-ingress on GKE.
It goes through both RBAC enabled and disabled instructions.
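The RBAC-enabled path boils down to giving the controller a ServiceAccount bound to the right ClusterRole; a minimal sketch for orientation (names and namespace assumed — the tutorial's own manifests are authoritative):

```yaml
# Minimal RBAC wiring for an nginx-ingress controller (sketch).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole   # ClusterRole defined in the tutorial's manifests
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
```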
On Fri, May 18, 2018 at 3:06 PM Montassar Dridi wrote:
Hello,
I'm using Google Kubernetes Engine. My cluster node version is 1.7. Since
that version became unsupported by Google Cloud for creating new clusters, I
need to use 1.8 or 1.9.
I'm having issues implementing my nginx-ingress-controller YAML file on the
new cluster version, but I keep
Hi!
Can you check my YAML files?
Steps:
1. Download
https://github.com/RouR/ToDo-ToBuy/blob/fc419b5c116d62edb61c5202e37513a9ee12a98d/k8s/
or
https://github.com/RouR/ToDo-ToBuy/blob/5a3991ffd22761f28df8120845a68d7030d10fd0/k8s/
2. kubectl create -f ./
3. cd dev
4. kubectl create -f ./
5. wait
Hi,
I have the following nginx-ingress-controller Deployment (replicas: 2)
running on a kubeadm cluster:
https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.9.0-beta.15/examples/deployment/nginx/kubeadm/nginx-ingress-controller.yaml
But how do I expose this outside the
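A common way to expose an ingress controller on a bare-metal kubeadm cluster is a NodePort Service selecting the controller pods; a sketch, assuming the Deployment's pods carry the label `k8s-app: nginx-ingress-controller` (check your manifest's actual labels):

```yaml
# NodePort Service exposing the nginx-ingress controller on every node.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: nginx-ingress-controller   # must match the Deployment's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # then reachable at http://<any-node-ip>:30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```

On clusters without a cloud load balancer, `type: NodePort` (optionally fronted by an external HAProxy/keepalived) is the usual substitute for `type: LoadBalancer`.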
I am using the nginx ingress controller on two k8s clusters. On one the
HTTPS works as expected, but on the other HTTPS traffic always routes to
the default 404 backend. I'm not sure how to troubleshoot this.
I have the TLS secret setup and the ingress references it. The ingress
controller
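When HTTPS consistently lands on the default 404 backend, it usually means the controller could not match any TLS-enabled Ingress rule (host mismatch, secret in the wrong namespace, or a malformed certificate secret). A minimal TLS Ingress to compare against, with hostnames and names assumed:

```yaml
# Ingress with TLS (extensions/v1beta1 was the API group in this era).
# The host under tls: must match the rule's host, and the secret must
# live in the same namespace as the Ingress itself.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls    # kubernetes.io/tls secret with tls.crt/tls.key
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
```

`kubectl describe ingress example-ingress` and the controller pod's logs will say whether the secret was found and the cert loaded; a fallback to the controller's self-signed "Kubernetes Ingress Controller Fake Certificate" is the telltale sign the secret was not picked up.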
Hello,
I'm using a Tomcat (Apache) server to run my Java application within a
Kubernetes Deployment.
When I used to expose the pods with a LoadBalancer Service and enabled
session affinity, I could run multiple application pods with no problem.
Now that I'm using the nginx ingress controller, the website
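With the nginx ingress controller, sticky sessions are configured per-Ingress through annotations rather than via the Service's `sessionAffinity`; a sketch using cookie-based affinity (host and backend names assumed):

```yaml
# Cookie-based session affinity on an nginx ingress resource: the
# controller sets a "route" cookie and pins each client to one pod.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: tomcat-service
              servicePort: 8080
```

Without these annotations the controller round-robins across pods, which breaks in-memory Tomcat sessions exactly as described.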
Hi there.
I've just deployed a Kubernetes cluster on 3 Ubuntu 16.04 virtual machines with
kubeadm following this doc:
http://kubernetes.io/docs/getting-started-guides/kubeadm/
I'm using Weave as network overlay, so I do not pass any argument to kubeadm
init.
By the end of the doc everything
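For reference, the kubeadm flow from that doc reduces to a few commands; a sketch, where the Weave manifest path and the join token are placeholders taken from your own setup:

```shell
# On the master: initialise the control plane (no extra flags needed,
# since Weave does not require a --pod-network-cidr argument).
sudo kubeadm init

# As a regular user, point kubectl at the new cluster.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install Weave Net as the pod network overlay
# (use the manifest URL from the current Weave Net docs).
kubectl apply -f <weave-net-manifest.yaml>

# On each worker: join using the token printed by `kubeadm init`.
sudo kubeadm join --token <token> <master-ip>:6443
```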
I was working on setting up an on-premises Kubernetes cluster using
Vagrant. The cluster is the default multi-VM cluster, which I set up
following the docs at
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html that
uses CoreOS.
Following the instructions at