We sort of want to move away from pre-populating env vars for services
- it has come up as a name-conflict problem for people, it is rather
noisy, and it doesn't get updated when a Service changes. Env vars
are a really sub-standard API for this.
On Wed, Aug 24, 2016 at 11:42 PM, Mayank wrote:
If you really think you need that, you can start with a sidecar container
in that pod that uses the kubernetes API (or kubectl) to look at that,
maybe.
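A minimal sketch of that sidecar idea: the sidecar would run something like `kubectl get svc <name> -o json` and pull the nodePort out of the result. The Service object below is a hand-written stand-in, not output from a real cluster, and the service/port names are hypothetical.

```python
# Sketch only: extract the nodePort for a named port from the JSON
# that `kubectl get svc <name> -o json` would print.

def node_port(svc_json, port_name):
    """Return the nodePort for the service port called port_name, or None."""
    for p in svc_json["spec"].get("ports", []):
        if p.get("name") == port_name:
            return p.get("nodePort")
    return None

# Hand-written example of a NodePort Service's shape:
svc = {
    "spec": {
        "type": "NodePort",
        "ports": [
            {"name": "redis", "port": 6379, "nodePort": 30379},
        ],
    }
}

print(node_port(svc, "redis"))  # 30379
```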
On Thursday, August 25, 2016, Mayank wrote:
Thanks Tim, I am asking for something simpler. We are already exposing a
service host and port to a pod as environment variables for all services in
that namespace. How about we also expose the NodePort information as an
environment variable to the pod? Not necessarily the service that po
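For reference, the env vars being discussed look like this: for a Service named `redis-master` (a hypothetical name), the kubelet injects `REDIS_MASTER_SERVICE_HOST` and `REDIS_MASTER_SERVICE_PORT` into pods in the same namespace created after the Service. A sketch of consuming them, with simulated defaults since there is no cluster behind this snippet:

```python
import os

# Simulate the injected values for this sketch; in a real pod the
# kubelet sets these for every Service in the namespace.
env = dict(os.environ)
env.setdefault("REDIS_MASTER_SERVICE_HOST", "10.0.0.11")
env.setdefault("REDIS_MASTER_SERVICE_PORT", "6379")

host = env["REDIS_MASTER_SERVICE_HOST"]
port = int(env["REDIS_MASTER_SERVICE_PORT"])
print(host, port)
```

Note there is no corresponding `*_SERVICE_NODEPORT` variable today; that is exactly what this thread is asking about.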
I don't think we want a mechanism for pods to know what service
NodePorts point to them. It would be too noisy (every node) and
that's just not a common pattern. If you need to register nodePorts,
I think you should do it as a controller pod that runs in the cluster,
reads the kube API and syncs
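A sketch of that controller approach: list the Services from the API, keep the NodePort ones, and sync each (service, port) pair's nodePort into an external registry. Here a plain dict stands in for etcd/consul, and the Service objects are hand-written samples rather than real API responses.

```python
# Sketch only: sync NodePort assignments from Service objects
# (as returned by GET /api/v1/services) into an external registry.

def sync_node_ports(services, registry):
    """Record every (service name, port) -> nodePort mapping."""
    for svc in services:
        if svc["spec"].get("type") != "NodePort":
            continue
        name = svc["metadata"]["name"]
        for p in svc["spec"].get("ports", []):
            if "nodePort" in p:
                registry[(name, p["port"])] = p["nodePort"]
    return registry

services = [
    {"metadata": {"name": "redis"},
     "spec": {"type": "NodePort",
              "ports": [{"port": 6379, "nodePort": 30379}]}},
    {"metadata": {"name": "web"},
     "spec": {"type": "ClusterIP",
              "ports": [{"port": 80}]}},
]
print(sync_node_ports(services, {}))  # {('redis', 6379): 30379}
```

A real controller would watch for Service changes rather than polling, and write into etcd/consul instead of a dict.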
Hi Rodrigo/Tim
Does this seem like a use case where pods would want to know the NodePorts
assigned to them, so that they could register them with an external service
discovery system, especially when load balancers are not available? I
don't want to hardcode the NodePorts either.
-Mayank
On Tuesd
Hi Rodrigo
This is within the company's infrastructure; the load balancer requires
integration with a cloud provider, which in our case would probably be F5.
Yes, that is one option, but it requires a lot more work, IMO.
On Tuesday, August 16, 2016 at 5:17:07 PM UTC-7, Rodrigo Campos wrote:
Why node port? Why not load balancer, know the balancer by configuration or
something, and be done?
On Tuesday, August 16, 2016, Mayank wrote:
Actually, you are right, the example I gave was basically registering
HostPorts. But we could also potentially do a Service of type NodePort per
pod per node and try to register that NodePort as well. HostPorts require
managing, and NodePorts are managed by k8s itself, so it would be nice to
use them.
That didn't explain why you are registering NODE PORTS. Do you mean
HostPorts instead?
On Thu, Aug 11, 2016 at 10:13 PM, Mayank wrote:
It might be just our internal infra, but the way we are slowly introducing
k8s and containers requires, in the first cut, that clients living
outside the k8s cluster be able to access these redis nodes in the k8s cluster.
The redis nodes will register their NodePorts to etcd/consul and then
t
On Tue, Aug 09, 2016 at 08:27:21PM -0700, Mayank wrote:
> Thanks Rodrigo.
> Basically we were evaluating a use case where a distributed application
> (redis based) reports its ip and ports to etcd for discovery by a client.
Why don't you use k8s service discovery?
Is "the client" running in the
NodePort is part of a Service. A Pod can be created before the
Service that points to it, or that Service can change.
Why are you registering a NodePort rather than a ClusterIP or PodIPs?
On Tue, Aug 9, 2016 at 8:27 PM, Mayank wrote:
Thanks Rodrigo.
Basically we were evaluating a use case where a distributed application
(redis based) reports its ip and ports to etcd for discovery by a client.
Since this application is running as a container in k8s, I want it to
discover its NodePort and report it correctly to etcd. Yes I a
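A sketch of that reporting step, assuming an etcd-style key/value layout (the key scheme, service name, and addresses below are made up for illustration, not from the thread):

```python
# Sketch only: the key/value a redis pod might write into etcd so
# clients outside the cluster can find it via node IP + NodePort.

def etcd_entry(service, node_ip, node_port):
    """Build a hypothetical etcd key/value for one redis node."""
    key = "/services/%s/%s" % (service, node_ip)
    value = "%s:%d" % (node_ip, node_port)
    return key, value

key, value = etcd_entry("redis", "192.168.1.10", 30379)
print(key, value)  # /services/redis/192.168.1.10 192.168.1.10:30379
```

The registering pod still needs some way to learn its node's IP (e.g. the downward API) and the assigned NodePort, which is the gap this thread is about.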