Kafka SSH Tunnel Connection without editing hostfile

2019-11-08 Thread Akshay Das
Hi Team,

I'm trying to consume from a Kafka cluster using the Java client, but the
Kafka brokers can only be accessed via a jump host/SSH tunnel. Even after
creating the SSH tunnel we are not able to read, because once the consumer
fetches metadata it uses the original broker hostnames to connect. Is it
possible to stop this behaviour? Also, we don't want to edit the hosts file
on the local machine.
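
For illustration, this is roughly what we are doing (hostnames, ports and the
topic name below are placeholders, not our real ones): the consumer bootstraps
fine through the tunnel, but the metadata it gets back points at the internal
broker hostnames.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TunneledConsumer {
    public static void main(String[] args) {
        // SSH tunnel created beforehand, e.g.:
        //   ssh -L 9092:broker1.internal:9092 user@jumphost.example.com
        // (broker1.internal and jumphost.example.com are placeholder names)
        Properties props = new Properties();
        // Bootstrap goes to the local end of the tunnel...
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "tunnel-test");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            // ...but the metadata response advertises broker1.internal:9092,
            // which is unreachable from this machine, so poll() never returns data.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s: %s%n", record.key(), record.value());
            }
        }
    }
}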

Thanks,
Akshay Das


Re: Kafka SSH Tunnel Connection without editing hostfile

2019-09-13 Thread Akshay Das
That is not the requirement. We want the communication to go via the SSH tunnel.

On Fri, Sep 13, 2019 at 4:50 PM M. Manna  wrote:

> Why not try using internal vs external traffic separation?
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
>
>
> if you set EXTERNAL endpoints and map them to SSL - your clients should only
> receive EXTERNAL endpoints for comms. Does this sound okay for you?
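>
> As a rough sketch (hostnames here are only illustrative), the listener split
> in server.properties would look something like:
>
> # broker1.internal / broker1.example.com are placeholder hostnames
> listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
> advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093
> listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
> inter.broker.listener.name=INTERNAL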
>
> Thanks,
>
> On Fri, 13 Sep 2019 at 06:41, Akshay Das  wrote:
>
>> We cannot use external endpoints for security reasons.
>> Is there an option to tell the ZooKeeper/broker not to send the broker
>> host details in its metadata to clients?
>>
>> On Thu, Sep 12, 2019 at 3:05 PM M. Manna  wrote:
>>
>>> Have you tried using EXTERNAL endpoints for your Kafka broker to separate
>>> TLS from internal traffic? Also, have you checked zk admin whether the
>>> broker metadata is exposing your TLS endpoints to clients?
>>>
>>>
>>> On Thu, 12 Sep 2019 at 10:23, Akshay Das 
>>> wrote:
>>>
>>> > Hi Team,
>>> >
>>> > I'm trying to consume from a kafka cluster using java client, but the
>>> > kafka
>>> > server can only be accessed via jumphost/ssh tunnel. But even after
>>> > creating ssh tunnel we are not able to read because once consumer
>>> > fetches
>>> > metadata it uses original hosts to connect to broker. Is it possible to
>>> > stop this behaviour?
>>> >
>>> > Thanks,
>>> > Akshay Das
>>> >
>>>
>>


Re: Kafka SSH Tunnel Connection without editing hostfile

2019-09-12 Thread Akshay Das
We cannot use external endpoints for security reasons.
Is there an option to tell the ZooKeeper/broker not to send the broker host
details in its metadata to clients?
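
For context, this is how we are checking what the client actually receives
(the bootstrap address is the local end of our SSH tunnel; names and ports are
placeholders) - the nodes returned are the brokers' advertised internal
hostnames:

import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ShowAdvertisedBrokers {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Local end of the SSH tunnel (placeholder port)
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // describeCluster() returns the broker endpoints exactly as the
            // cluster advertises them to clients - these are the hosts the
            // consumer will try to connect to after its metadata fetch.
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            for (Node node : nodes) {
                System.out.println(node.host() + ":" + node.port());
            }
        }
    }
}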

On Thu, Sep 12, 2019 at 3:05 PM M. Manna  wrote:

> Have you tried using EXTERNAL endpoints for your Kafka broker to separate
> TLS from internal traffic? Also, have you checked zk admin whether the
> broker metadata is exposing your TLS endpoints to clients?
>
>
> On Thu, 12 Sep 2019 at 10:23, Akshay Das 
> wrote:
>
> > Hi Team,
> >
> > I'm trying to consume from a kafka cluster using java client, but the
> > kafka
> > server can only be accessed via jumphost/ssh tunnel. But even after
> > creating ssh tunnel we are not able to read because once consumer fetches
> > metadata it uses original hosts to connect to broker. Is it possible to
> > stop this behaviour?
> >
> > Thanks,
> > Akshay Das
> >
>


Kafka SSH Tunnel Connection without editing hostfile

2019-09-12 Thread Akshay Das
Hi Team,

I'm trying to consume from a Kafka cluster using the Java client, but the
Kafka brokers can only be accessed via a jump host/SSH tunnel. Even after
creating the SSH tunnel we are not able to read, because once the consumer
fetches metadata it uses the original broker hostnames to connect. Is it
possible to stop this behaviour?

Thanks,
Akshay Das