I set up a private Kubernetes cluster on Ubuntu. I have a few hosts
{192.168.2.10, 192.168.2.11, 192.168.2.12},
and some others that are not in the k8s cluster but are on the same LAN (192.168.2.13,
192.168.2.14) and run the database.
The problem is that Pods in the k8s cluster cannot reach the database server; it seems
there's no route.
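If the nodes themselves can reach the database hosts, one common workaround is a Service without a selector plus a hand-written Endpoints object, so pods get a stable in-cluster name for the database. A rough sketch, assuming the IPs above and a Postgres-style port 5432 (adjust to your database):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432            # assumed database port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db       # must match the Service name
subsets:
- addresses:
  - ip: 192.168.2.13
  - ip: 192.168.2.14
  ports:
  - port: 5432
EOF

If pods still can't get through even though the nodes can, the usual suspect is that pod traffic leaving the cluster isn't being NAT'd to the node IP, which is a setting of whichever network plugin you deployed.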
What actually happens is that we use iptables to distinguish which LB
a packet came through, and do a local NAT to a pod backend for that
Service.
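A rough way to poke at those rules on a node, assuming kube-proxy is running in iptables mode (the chain names below are kube-proxy's own):

# per-Service entry points for ClusterIP and external traffic
sudo iptables -t nat -L KUBE-SERVICES -n
# the node-port entries that a load balancer ultimately forwards to
sudo iptables -t nat -L KUBE-NODEPORTS -n
# load-balancer-specific rules live in KUBE-FW-* chains, which jump to
# the Service's KUBE-SVC-* chain that picks a pod backend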
On Tue, Feb 21, 2017 at 5:46 AM, Rodrigo Campos wrote:
> I'd be surprised if it doesn't do port remapping on GKE.
> Service type load balancer include
Rudy,
The OpenID Connect client auth provider only caches in memory[0]. It
doesn't persist that information to disk. If you're not seeing the
request on subsequent invocations of kubectl then you're not
exercising the plugin. At least that's what the code would do if there
isn't a bug.
Does your
cc'ing Eric Chiang who worked on the caching code.
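For reference, the plugin is configured with kubectl config set-credentials; the issuer URL and client values below are placeholders, not Rudy's real setup:

kubectl config set-credentials oidc-user \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://issuer.example.com \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=client-secret=SECRET \
  --auth-provider-arg=id-token=ID_TOKEN \
  --auth-provider-arg=refresh-token=REFRESH_TOKEN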
On Mon, Feb 20, 2017 at 7:09 AM Rudy Bonefas wrote:
> We have decided to use OpenID Connect with Kubectl and I have been in the
> process of writing an OpenID Connect server using the nimbusds java sdk.
> When kubectl first connects to my server
Thanks Rodrigo
Hi :)
Is it possible to create Kubernetes pods with the Murano API or the Murano CLI?
Thank you all
I think it is looking at the context's namespace and the embedded
namespace. If the two don't match, it has a grumble. Not a great UX.
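One way around the grumble is to make the context's namespace match what the manifests embed before applying; the namespace name here is just a placeholder:

kubectl config set-context $(kubectl config current-context) --namespace=xyz
kubectl apply -f xyz-deployment.yaml

(or drop metadata.namespace from the YAML and pass --namespace on each apply).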
On Tue, Feb 21, 2017 at 3:57 AM, wrote:
> Odd one this ?
>
> Previously the following worked:
>
> kubectl apply -f xyz
If you need to read and write AT THE SAME TIME, you need a PV that
supports it, which EBS and GCE (and every block device) do not.
If you want to be able to use the data in serial steps of a pipeline,
it should be fine, since it is only mounted in one mode at a given
point in time.
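As a sketch of that serial pattern, two Jobs can use the same ReadWriteOnce claim one after the other; the names, image and paths below are made up:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pipeline-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: step-1-write
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: writer
        image: busybox
        command: ["sh", "-c", "echo hello > /data/out.txt"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pipeline-data
EOF
# step-2-read would be a second Job that mounts claimName: pipeline-data
# and is only created after step-1-write completes, so the volume is
# attached to one node at a time.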
On Tue, Feb 21
I have a scheduler that submits Jobs to a k8s cluster. I'd like a Job (via a
Pod running on a Node) to be able to write data to a volume, and then have
another Job (via a Pod, potentially on a different node) to read that data.
Reading about persistent volumes (PV) and PV Claims, it isn't clear to
I'd be surprised if it doesn't do port remapping on GKE.
Service type LoadBalancer includes type NodePort, which in turn includes type
ClusterIP. So, since type LoadBalancer includes type NodePort,
all nodes in the cluster open a random port (let's say ) and when
a request comes to
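As a rough sketch of that layering (the name, label and port numbers below are made-up examples; nodePort can be omitted and Kubernetes will pick one):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer        # implies NodePort, which implies ClusterIP
  selector:
    app: web
  ports:
  - port: 80                # the ClusterIP / load balancer port
    targetPort: 8080        # the container port on the pods
    nodePort: 30080         # the port opened on every node
EOF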
On Thursday, 16 February 2017 21:18:42 UTC, Kubernetes learner wrote:
> what is the best strategy to deploy images to Kubernetes with YAML from a
> remote server (Jenkins)
We orchestrate all our provisioning of other GKE clusters, GCE, CloudSQL, Cloud
Storage, Networks etc from Jenkins running
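One minimal shape for the deploy step itself (deployment name, image and kubeconfig path are placeholders, not our actual pipeline) is a Jenkins shell step that calls kubectl:

# after the image has been built and pushed
export KUBECONFIG=/var/lib/jenkins/.kube/config
kubectl apply -f k8s/deployment.yaml
# or only bump the image of an existing Deployment
kubectl set image deployment/myapp myapp=gcr.io/my-project/myapp:${BUILD_NUMBER}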
Since GKE creates a fully managed master node, many of the details are
hidden from the user (which has pros and cons). In essence, the master
starts its processes on a node where it sets the cloud provider to GKE and
has API keys that have access to your account on GKE. There is a process
that monit
Odd one this ?
Previously the following worked:
kubectl apply -f xyz-deployment.yaml
kubectl apply -f xyz-service.yaml
kubectl apply -f xyz-ingress.yaml
* whereby the custom namespace is defined inside each YAML.
Now it fails:
---
So I have finally found the root cause. It is mandatory to explicitly
specify http protocol in the endpoint definition:
kubeadm init --external-etcd-endpoints="http://msl-kub01:2379"
OR
kubeadm init --external-etcd-endpoints=http://msl-kub01:2379
2017-02-15 17:20 GMT+01:00 :
> Hi,
> I have tr
I have a cluster on GKE and use a Service of type LoadBalancer with a
static IP address, and it works correctly.
I just do not understand how, and would like to change that.
When I kubectl apply such a load balancer, I see that:
- it creates a target pool containing the nodes in the cluster
- it
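A quick way to look at those pieces from the gcloud side (assuming the default project is configured; the pool name is whatever GKE generated):

# the forwarding rule holds the external IP and points at the target pool
gcloud compute forwarding-rules list
# the target pool lists the cluster's nodes as its instances
gcloud compute target-pools list
gcloud compute target-pools describe <pool-name> --region <region>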