That is not the requirement. We want the communication to go through an ssh tunnel.
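
For context, a rough sketch of the kind of tunnel setup we mean (the broker
hostnames, ports and jumphost below are placeholders, not our actual hosts):

    # one local forward per broker, routed through the jumphost
    ssh -N -L 9092:broker1.internal:9092 \
           -L 9093:broker2.internal:9093 \
           user@jumphost

    # /etc/hosts on the client machine, so that the advertised hostnames
    # returned in the broker metadata resolve to the tunnel endpoints
    127.0.0.1  broker1.internal
    127.0.0.1  broker2.internal

This only works if each broker advertises a distinct port, otherwise the local
forwards collide on 127.0.0.1.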

On Fri, Sep 13, 2019 at 4:50 PM M. Manna <manme...@gmail.com> wrote:

> Why not try separating internal and external traffic?
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
>
>
> If you set EXTERNAL endpoints and map them to SSL, your clients should only
> receive EXTERNAL endpoints for comms. Does this sound okay for you?
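>
> A minimal server.properties sketch of that separation (hostnames and ports
> below are placeholders for your actual setup):
>
>   listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
>   advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093
>   listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
>   inter.broker.listener.name=INTERNAL
>
> Clients that connect on the EXTERNAL listener get back only the EXTERNAL
> advertised endpoints in the metadata.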
>
> Thanks,
>
> On Fri, 13 Sep 2019 at 06:41, Akshay Das <aks...@fivetran.com> wrote:
>
>> We cannot use external endpoints for security reasons.
>> Is there an option to tell zookeeper/broker not to send the broker host
>> details in the metadata to its clients?
>>
>> On Thu, Sep 12, 2019 at 3:05 PM M. Manna <manme...@gmail.com> wrote:
>>
>>> Have you tried using EXTERNAL endpoints for your Kafka broker to separate
>>> TLS from internal traffic? Also, have you checked zk admin whether the
>>> broker metadata is exposing your TLS endpoints to clients?
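>>>
>>> For example (the ZooKeeper host and broker id 0 below are placeholders),
>>> something like:
>>>
>>>   bin/zookeeper-shell.sh zk-host:2181 get /brokers/ids/0
>>>
>>> shows the endpoints that broker has registered.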
>>>
>>>
>>> On Thu, 12 Sep 2019 at 10:23, Akshay Das <aks...@fivetran.com.invalid>
>>> wrote:
>>>
>>> > Hi Team,
>>> >
>>> > I'm trying to consume from a Kafka cluster using the Java client, but the
>>> > Kafka server can only be accessed via a jumphost/ssh tunnel. Even after
>>> > creating the ssh tunnel we are not able to read, because once the consumer
>>> > fetches metadata it uses the original hosts to connect to the brokers. Is
>>> > it possible to stop this behaviour?
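>>> >
>>> > Roughly what the client side looks like (the topic, group id and the
>>> > tunneled local port are placeholders):
>>> >
>>> >   import java.util.Collections;
>>> >   import java.util.Properties;
>>> >   import org.apache.kafka.clients.consumer.KafkaConsumer;
>>> >
>>> >   Properties props = new Properties();
>>> >   // bootstrap goes through the ssh tunnel on a local port
>>> >   props.put("bootstrap.servers", "127.0.0.1:9092");
>>> >   props.put("group.id", "test-group");
>>> >   props.put("key.deserializer",
>>> >       "org.apache.kafka.common.serialization.StringDeserializer");
>>> >   props.put("value.deserializer",
>>> >       "org.apache.kafka.common.serialization.StringDeserializer");
>>> >   KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
>>> >   consumer.subscribe(Collections.singletonList("my-topic"));
>>> >
>>> > The bootstrap connection succeeds, but the metadata response carries the
>>> > brokers' advertised hostnames, and all subsequent fetches go to those
>>> > hosts instead of the tunnel.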
>>> >
>>> > Thanks,
>>> > Akshay Das
>>> >
>>>
>>
