First, there's currently no way to expose a headless service
externally.  NodePort doesn't make sense for it, unless we assigned a
different NodePort value per backend, and since the backend set can
change dynamically that is not practical. Similarly, LoadBalancer would
need a different LB for every backend, with a different IP for each.
Nobody has yet shown me a reason for doing this.

A headless service is not really about "I don't want kube-proxy"; it's
about "these backends are not fungible".  You can easily build
load-balancers that don't go through kube-proxy - you just have to do
the work that kube-proxy does: watch Endpoints and update the
backend set.
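
To make that concrete, here is a minimal sketch in Python of the
reconciliation step such a balancer would run each time the Endpoints
object changes. The endpoint data is made up for illustration; a real
balancer would get it from a watch on the Kubernetes Endpoints API.

```python
# Sketch of the core logic of a load balancer that bypasses kube-proxy:
# compare the backend set the balancer is currently using against the
# endpoints observed from the API server, and compute the changes to apply.

def reconcile(current_backends, observed_endpoints):
    """Return (to_add, to_remove): the changes needed to make the
    balancer's backend set match the observed endpoints."""
    current = set(current_backends)
    observed = set(observed_endpoints)
    return observed - current, current - observed

# Example: one pod was rescheduled, so its IP changed.
backends = {("10.0.1.5", 11111), ("10.0.2.7", 11111)}
endpoints = {("10.0.1.5", 11111), ("10.0.3.9", 11111)}

to_add, to_remove = reconcile(backends, endpoints)
print(sorted(to_add))     # [('10.0.3.9', 11111)]
print(sorted(to_remove))  # [('10.0.2.7', 11111)]
```

This diff is the same work kube-proxy does on every Endpoints change; a
custom balancer just applies it to its own backend set instead of to
proxy rules.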



On Tue, Jun 21, 2016 at 2:47 PM, Cole Mickens <[email protected]> wrote:
> If the goal is to avoid kube-proxy, then NodePort is _not_ the solution.
> NodePort connects to the clusterIP and/or still relies on kube-proxy. (I
> can convince myself of this easily: if the NodePort is opened on every
> node, but my Pods behind the service are only running on a subset of the
> hosts, then kube-proxy must still be proxying traffic.)
>
> I think running the Pods with HostPort declared is another option. Then you
> would point your client at the hosts running those Pods. (But those hosts
> could change, so you might want to use a DaemonSet+HostPort to ensure it's
> always available on every host at the fixed HostPort.)
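
For illustration, a DaemonSet along those lines might look like the
following. The name, labels, and image are hypothetical, and the API
group is the extensions/v1beta1 one DaemonSets live in as of
Kubernetes 1.2/1.3:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: echo-server
spec:
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: example/echo-server:latest   # hypothetical image
          ports:
            - containerPort: 11111
              hostPort: 11111   # same fixed port on every node
```

With this in place, every node accepts traffic on port 11111 and hands
it to the local pod, so clients can target any node directly.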
>
> I guess I'm curious why it is necessary to avoid kube-proxy?
>
> On Tue, Jun 21, 2016 at 2:19 PM, kant kodali <[email protected]> wrote:
>>
>> I am on AWS. Now I just added a TCP rule in my security group to open port
>> 30080. Now it says "connection refused". I suspect it might be the key
>> (.pem) that is required to log in? I am doing telnet, though, so I guess
>> that can't be the case.
>>
>> The endpoints are bound! I can see a private IP bound to the machine
>> where the echo-server pod is running.
>>
>> On Tue, Jun 21, 2016 at 2:03 PM, Warren Strange <[email protected]>
>> wrote:
>>>
>>>
>>> Things to check:
>>>
>>> kubectl describe svc echo-server
>>>
>>> Make sure the service shows that endpoints are bound to a pod. If no
>>> endpoints are bound, it means your service selectors didn't work.
>>>
>>> If you are on GCE, make sure the IP is the external IP (look in your
>>> console). I am not sure, but it might be different from the IP reported
>>> by describe node.
>>>
>>> Make sure firewall ports are opened up
>>>
>>>
>>>
>>> On Tuesday, June 21, 2016 at 2:51:23 PM UTC-6, kant kodali wrote:
>>>>
>>>> Hi William,
>>>>
>>>> So here is what I tried
>>>>
>>>> This is my service config.yaml
>>>>
>>>> apiVersion: v1
>>>> kind: Service
>>>> metadata:
>>>>   name: echo-server
>>>> spec:
>>>>   ports:
>>>>     - port: 11111
>>>>       nodePort: 30080
>>>>       name: "echo-server"
>>>>   selector:
>>>>     app: echo-server
>>>>   clusterIP: None
>>>>   type: NodePort
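
One hedged guess at why a config like this misbehaves: clusterIP: None
makes the service headless, and a headless service has no proxy rules
for the NodePort to forward to. A variant that should behave like a
normal NodePort service simply drops that line:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  type: NodePort
  ports:
    - port: 11111
      targetPort: 11111   # port the pod listens on; defaults to "port"
      nodePort: 30080
      name: "echo-server"
  selector:
    app: echo-server
  # no "clusterIP: None" here - headless and NodePort don't combine
```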
>>>>
>>>>
>>>> and then I did kubectl describe node to get the external IP of the
>>>> machine where the "echo-server" pod is running (as I want clients who
>>>> are external to the cluster to be able to connect), and finally did
>>>>
>>>> telnet <externalIP> 30080 // This didn't quite work. Any idea?
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Jun 21, 2016 at 1:35 PM, kant kodali <[email protected]> wrote:
>>>>>
>>>>> Thanks! I assume you meant IP:NodePort.
>>>>>
>>>>> On Tue, Jun 21, 2016 at 12:14 PM, Warren Strange <[email protected]>
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> NodePorts are exposed on every node in the cluster. If your node has an
>>>>>> externally reachable IP, you can reach your service through nodeport:IP
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tuesday, June 21, 2016 at 1:04:15 PM UTC-6, kant kodali wrote:
>>>>>>>
>>>>>>> Hi Warren,
>>>>>>>
>>>>>>> I want my headless service to be visible to clients external to the
>>>>>>> cluster, but I am not sure how? (I understand that through NodePorts
>>>>>>> it is accessible within the cluster.) The clients who are external
>>>>>>> to the cluster will do the load balancing themselves.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Kant
>>>>>>>
>>>>>>> On Sun, Jun 19, 2016 at 1:14 PM, Warren Strange
>>>>>>> <[email protected]> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> You can expose your service through a NodePort - which will be
>>>>>>>> available on all nodes in the cluster. You need to open up the relevant
>>>>>>>> firewall ports.
>>>>>>>>
>>>>>>>> If you want to do your own load balancing see:
>>>>>>>>
>>>>>>>> https://blog.oestrich.org/2016/01/nodeport-kubernetes-load-balancer/
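
One way to do that balancing yourself is a plain TCP proxy in front of
the nodes' NodePort. A sketch as an nginx stream block, with
hypothetical node IPs:

```nginx
stream {
    upstream echo_nodes {
        # node IPs are hypothetical; each node exposes the same NodePort
        server 10.0.0.10:30080;
        server 10.0.0.11:30080;
    }
    server {
        listen 11111;          # port clients connect to
        proxy_pass echo_nodes; # round-robin across the nodes
    }
}
```

The trade-off: traffic still passes through kube-proxy on the chosen
node, so this distributes load but does not bypass kube-proxy; for that
you would balance across pod IPs directly by watching Endpoints.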
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Jun 19, 2016 at 1:38 PM kant kodali <[email protected]>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> HI Warren,
>>>>>>>>>
>>>>>>>>> Yes, I want my headless service to be internet accessible. Also, our
>>>>>>>>> requirement is that we don't want to attach a load balancer, because
>>>>>>>>> we don't want to go through kube-proxy; instead we want to do the
>>>>>>>>> load balancing ourselves. Is this possible with Kubernetes?
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Kant
>>>>>>>>>
>>>>>>>>> On Sun, Jun 19, 2016 at 10:47 AM, Warren Strange
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If you just want to talk to a service from your desktop, you can
>>>>>>>>>> often use kubectl proxy or kubectl port-forward to forward local
>>>>>>>>>> traffic to the cluster.
>>>>>>>>>>
>>>>>>>>>> If you want a real DNS name, and your service to be internet
>>>>>>>>>> accessible, you need to set up an Ingress resource.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Saturday, June 18, 2016 at 3:17:19 PM UTC-6, kant kodali wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hi Guys,
>>>>>>>>>>>
>>>>>>>>>>> Is there a way to get a public DNS name or something for a headless
>>>>>>>>>>> service inside my kubernetes cluster, so that I can talk to the pods
>>>>>>>>>>> backed by my headless service from, say, my home computer?
>>>>>>>>>>>
>>>>>>>>>>> Thanks!
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> You received this message because you are subscribed to a topic in
>>>>>>>>>> the Google Groups "Containers at Google" group.
>>>>>>>>>> To unsubscribe from this topic, visit
>>>>>>>>>> https://groups.google.com/d/topic/google-containers/rRKvj4-uPdI/unsubscribe.
>>>>>>>>>> To unsubscribe from this group and all its topics, send an email
>>>>>>>>>> to [email protected].
>>>>>>>>>> To post to this group, send email to [email protected].
>>>>>>>>>> Visit this group at
>>>>>>>>>> https://groups.google.com/group/google-containers.
>>>>>>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>
>>
>>
>
>

