Are you using Kubernetes networking in these pods or host networking?
If you are using Kubernetes networking, which network solution are you using
to set up pod networking on AWS?
On Mon, Mar 20, 2017 at 11:08 PM Vadim Solovey wrote:
> We have cluster workers with multiple ENIs (AWS network inte
We have cluster workers with multiple ENIs (AWS network interfaces). Some
of the pods need to bind to, say, the first interface, while other pods need
to bind to the 2nd interface.
This is due to the fact that we are running a layer of about 20 proxies passing
our traffic to the world, and these proxies need t
Use the Prometheus Operator; it glues Prometheus to the Kubernetes
service discovery system, enabling you to use label queries to find and
scrape your applications.
https://github.com/coreos/prometheus-operator
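For example, a minimal ServiceMonitor sketch for the Operator (the `example-app` name, labels, and `web` port are placeholders, not from this thread):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app        # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-app     # scrape Services carrying this label
  endpoints:
    - port: web            # named port on the matched Service
      interval: 30s
```

The Operator turns this label query into Prometheus scrape configuration for you.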
Cheers,
Brandon
On Mon, Mar 20, 2017 at 7:28 PM wrote:
> Hello
>
> I'm trying t
On Tue, Mar 21, 2017 at 1:43 AM, wrote:
> I'm about to perform an OS upgrade on a K8s cluster and was hoping to know the
> best practices for doing so.
>
> I heard from others that a few bad experiences happened when one host node
> was upgraded ( OS patch applied ) and then the Master was unabl
Hello
I'm trying to integrate Prometheus with Kubernetes using service discovery so
that every deployed container shows up in Prometheus metrics. Can you help me
with what I need to make it work and, if possible, give me an example? I've been
looking for examples and documentation for that but I can't f
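Without the Operator, plain Prometheus can discover pods directly through the Kubernetes API; a minimal scrape-config sketch (the annotation-based filter is a common convention, not a requirement):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod          # discover every pod via the Kubernetes API
    relabel_configs:
      # keep only pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Prometheus must run with credentials that allow listing pods (in-cluster service account or a kubeconfig).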
If you want layer 5/7 balancing, you can use Linkerd for load balancing
both internal and ingress traffic. Take a look at these blog posts for some
examples:
https://blog.buoyant.io/2016/10/04/a-service-mesh-for-kubernetes-part-i-top-line-service-metrics/
https://blog.buoyant.io/2016/11/18/a-serv
The trick is that every network is unique, so you have to fill in the
blanks on how to get traffic into your cluster from the outside. That
could be through a load balancer (as in GCE or AWS), or it could be
through node IPs or something else. What you have will sort of limit
what you can do.
To
Yes, you can have load balancing on bare metal.
In cloud environments, *to expose your app to the internet* you use a
Service of type LoadBalancer (which creates an LB managed by the cloud
provider) or an Ingress.
If the app is only going to be used by another in-cluster app, then no LB is
nee
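A minimal sketch of such a Service (the `my-app` name, selector, and ports are placeholders); changing `type` to `ClusterIP` keeps it internal-only:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app             # hypothetical name
spec:
  type: LoadBalancer       # cloud provider provisions an external LB
  selector:
    app: my-app            # route to pods carrying this label
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 8080     # port the pods listen on
```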
On Mon, Mar 20, 2017 at 02:06:38PM -0400, Junaid Subhani wrote:
> I see what you say and understand it. But my requirement here is not to
> upgrade the Kubernetes version.
>
> It is simply to apply OS patches on nodes of an already running cluster
> with minimal downtime for the application.
O
I don't quite understand your request. Can you just rely on the Linux route
table to handle this?
On Mon, Mar 20, 2017 at 1:27 PM wrote:
> Hi Guys,
>
> I've been going over the documents and couldn't find a clear answer.
>
> We have a k8s service that requires accessing an external service API,
This talk from Next17 is fantastic for learning how networking works in
K8s, including load balancer topics:
https://www.youtube.com/watch?v=y2bhV81MfKQ
Without the cloud load balancer, you'll need some way to map the external
IP & port to your service NodePort.
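For reference, a minimal NodePort sketch (names and ports are placeholders); an external load balancer or DNS entry would then point at `<node-ip>:30080`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app             # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80             # cluster-internal Service port
      targetPort: 8080     # container port
      nodePort: 30080      # opened on every node (default range 30000-32767)
```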
On Mon, Mar 20, 2017 at 3:08 PM,
I set up a Kubernetes cluster in my VMware environment, and I am trying to
understand how load balancing works.
If I have a setup with:
- Master node
- Slave node1
- Slave node2
and the same pod runs on node1 and node2, does Kubernetes perform load balancing
between the two nodes? I read on the int
Hi Guys,
I've been going over the documents and couldn't find a clear answer.
We have a k8s service that requires accessing an external service API, and it
is required to be able to do so from several different public IPs.
Is there a way to achieve this? Something like attaching multiple NICs t
I see what you say and understand it. But my requirement here is not to
upgrade the Kubernetes version.
It is simply to apply OS patches on nodes of an already running cluster
with minimal downtime for the application.
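A common rolling approach for this, sketched with standard kubectl commands (the node name and the patch command are placeholders; it assumes your apps run with enough replicas to tolerate one node draining at a time):

```shell
# Stop scheduling onto the node and evict its pods gracefully
kubectl drain node-1 --ignore-daemonsets --delete-local-data

# Apply OS patches and reboot (distribution-specific; Debian/Ubuntu shown)
ssh node-1 'sudo apt-get update && sudo apt-get -y upgrade && sudo reboot'

# Once the node is back and Ready, allow scheduling again
kubectl uncordon node-1

# Verify node status before moving on to the next node
kubectl get nodes
```

Repeating this one node at a time keeps the application serving throughout the upgrade.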
On Mon, Mar 20, 2017 at 1:56 PM, Rodrigo Campos wrote:
> On Mon, Mar 20,
On Mon, Mar 20, 2017 at 10:43:31AM -0700, ijunaidsubh...@gmail.com wrote:
> I'm about to perform an OS upgrade on a K8s cluster and was hoping to know the
> best practices for doing so.
>
> I heard from others that a few bad experiences happened when one host node
> was upgraded ( OS patch appli
I'm about to perform an OS upgrade on a K8s cluster and was hoping to know the
best practices for doing so.
I heard from others that a few bad experiences happened when one host node was
upgraded (OS patch applied) and then the Master was unable to see it (some
JSON incompatibility issue).
I had the exact same issue. You shouldn't play with ip-masq; VXLAN-backed
flannel relies on it to pass messages between pods on different nodes.
What worked for me was to set the following properties in the hdfs-site.xml:
dfs.client.use.datanode.hostname true
dfs.datanode.use.datanode.hostna
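For reference, a sketch of how those properties sit in hdfs-site.xml (the second property name is cut off above; `dfs.datanode.use.datanode.hostname` is the standard HDFS counterpart, assumed here):

```xml
<configuration>
  <!-- clients connect to DataNodes by hostname instead of reported IP -->
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
  <!-- DataNodes use hostnames when connecting to other DataNodes -->
  <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>true</value>
  </property>
</configuration>
```

This matters on Kubernetes because pod IPs are not routable or stable from the client's point of view, while hostnames can be resolved to something reachable.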