Re: Option --bootstrap-server not working with kafka-acls.sh or kafka-configs.sh

2021-07-14 Thread Deepak Raghav
Hi

Use it with --command-config client_security.properties and pass the
security configuration in that properties file, for example:

security.protocol=SASL_SSL

sasl.mechanism=PLAIN

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
required \

username="*" \

password="*";



On Wed, 14 Jul, 2021, 6:11 pm Benardi Nunes, 
wrote:

> Hello to all,
> I've noticed that when trying to use kafka-acls.sh or kafka-configs.sh by
> providing the option --bootstrap-server it always leads to the error
> java.util.concurrent.TimeoutException.
> The same commands worked without an issue when providing
> --authorizer-properties zookeeper.connect and --zookeeper respectively.
> I've tested this on different types of listeners such as SSL, PLAINTEXT,
> SASL_PLAINTEXT, and SASL_SSL with no difference.
>
> Unfortunately, for kafka-configs.sh the --zookeeper option is being
> deprecated and it's not possible to use it to update configurations on a
> broker level hence my conundrum.
> Does anyone have some insight into what may be happening? I've tested this
> with kafka_2.12-2.8.0, kafka_2.12-2.7.0, kafka_2.12-2.5.0 and
> kafka_2.12-2.4.0.
>
> Best regards, José Benardi de Souza Nunes
>
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Deepak Raghav
Congratulations

On Mon, 19 Oct, 2020, 11:02 pm Bill Bejeck,  wrote:

> Congratulations Chia-Ping!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax  wrote:
>
> > Congrats Chia-Ping!
> >
> > On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > I'm happy to announce that Chia-Ping Tsai has accepted his invitation
> to
> > > become an Apache Kafka committer.
> > >
> > > Chia-Ping has been contributing to Kafka since March 2018 and has made
> 74
> > > commits:
> > >
> > > https://github.com/apache/kafka/commits?author=chia7712
> > >
> > > He's also authored several major improvements, participated in the KIP
> > > discussion and PR reviews as well. His major feature development
> > includes:
> > >
> > > * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> > > * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> > > * KIP-331: Add default implementation to close() and configure() for
> > serde
> > > * KIP-367: Introduce close(Duration) to Producer and AdminClients
> > > * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> > command
> > >
> > > In addition, Chia-Ping has demonstrated his great diligence fixing test
> > > failures, his impressive engineering attitude and taste in fixing
> tricky
> > > bugs while keeping simple designs.
> > >
> > > Please join me to congratulate Chia-Ping for all the contributions!
> > >
> > >
> > > -- Guozhang
> > >
> >
>


Re: Handle exception in kafka stream

2020-09-01 Thread Deepak Raghav
Hi John

Please find my inline response below

 Regards and Thanks
Deepak Raghav



On Tue, Sep 1, 2020 at 8:22 PM John Roesler  wrote:

> Hi Deepak,
>
> It sounds like you're saying that the exception handler is
> correctly indicating that Streams should "Continue", and
> that if you stop the app after handling an exceptional
> record but before the next commit, Streams re-processes the
> record?
> *Deepak Raghav : Your understanding is correct, but what I have seen is that
> if it got a valid record after some exceptional records (bad messages) and
> the app got stopped, then after a restart it does not read those bad messages
> again in the handle method.*



> If that's what you're seeing, then it's how the system is
> designed to operate. We don't commit after every record
> because the overhead would be enormous. If you can't
> tolerate seeing duplicates in your error file, then it
> sounds like the simplest thing for you would be to maintain
> an index of the records you have already saved off so that
> you can gracefully handle re-processing. E.g., you might
> have a separate file per topic-partition that you update
> after appending to your error log to indicate the highest
> offset you've handled. Then, you can read it from the
> exception handler to see if the record you're handling is
> already logged. Just an idea.
>


> * Deepak : Thanks for suggesting the approach. *
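
For reference, a minimal sketch of that idea (illustrative only: the class
name, the /tmp/bad-records directory, and the plain-text marker files are
assumptions, not something from this thread):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;

public class OffsetTrackingExceptionHandler implements DeserializationExceptionHandler {

    private Path baseDir;

    @Override
    public void configure(Map<String, ?> configs) {
        try {
            baseDir = Paths.get("/tmp/bad-records");   // hypothetical location
            Files.createDirectories(baseDir);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public DeserializationHandlerResponse handle(ProcessorContext context,
                                                 ConsumerRecord<byte[], byte[]> record,
                                                 Exception exception) {
        try {
            // One marker file per topic-partition holding the highest offset already logged.
            String tp = record.topic() + "-" + record.partition();
            Path marker = baseDir.resolve(tp + ".offset");
            long lastLogged = Files.exists(marker)
                    ? Long.parseLong(new String(Files.readAllBytes(marker), StandardCharsets.UTF_8).trim())
                    : -1L;
            // Append to the error file only if this offset has not been logged before,
            // so a restart that re-reads uncommitted records does not create duplicates.
            if (record.offset() > lastLogged) {
                Path errorFile = baseDir.resolve(tp + ".errors");
                // A real handler would write the record payload; the offset keeps this sketch short.
                Files.write(errorFile, (record.offset() + "\n").getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                Files.write(marker, Long.toString(record.offset()).getBytes(StandardCharsets.UTF_8));
            }
        } catch (Exception e) {
            // Keep the stream running even if this bookkeeping fails.
        }
        return DeserializationHandlerResponse.CONTINUE;
    }
}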




> I hope this helps,
> -John
>
> On Tue, 2020-09-01 at 16:36 +0530, Deepak Raghav wrote:
> > Hi Team
> >
> > Just a reminder.
> > Can you please help me with this?
> >
> > Regards and Thanks
> > Deepak Raghav
> >
> >
> >
> > On Tue, Sep 1, 2020 at 1:44 PM Deepak Raghav 
> > wrote:
> >
> > > Hi Team
> > >
> > > I have created a CustomExceptionHandler class by
> > > implementing DeserializationExceptionHandler interface to handle the
> > > exception during deserialization time.
> > >
> > > But the problem with this approach is that if there is some exception
> > > raised for some record and after that stream is stopped and
> > > restarted again, it reads those bad messages again.
> > >
> > > I am storing those bad messages in some file in the filesystem and with
> > > this approach, duplicate messages are getting appended in the file
> when the
> > > stream is started since those bad message's offset are not getting
> > > increased.
> > >
> > > Please let me know if I missed anything.
> > >
> > > Regards and Thanks
> > > Deepak Raghav
> > >
> > >
>
>


Re: Handle exception in kafka stream

2020-09-01 Thread Deepak Raghav
Hi Team

Just a reminder.
Can you please help me with this?

Regards and Thanks
Deepak Raghav



On Tue, Sep 1, 2020 at 1:44 PM Deepak Raghav 
wrote:

> Hi Team
>
> I have created a CustomExceptionHandler class by
> implementing DeserializationExceptionHandler interface to handle the
> exception during deserialization time.
>
> But the problem with this approach is that if there is some exception
> raised for some record and after that stream is stopped and
> restarted again, it reads those bad messages again.
>
> I am storing those bad messages in some file in the filesystem and with
> this approach, duplicate messages are getting appended in the file when the
> stream is started since those bad message's offset are not getting
> increased.
>
> Please let me know if I missed anything.
>
> Regards and Thanks
> Deepak Raghav
>
>


Handle exception in kafka stream

2020-09-01 Thread Deepak Raghav
Hi Team

I have created a CustomExceptionHandler class by implementing the
DeserializationExceptionHandler interface to handle exceptions raised during
deserialization.
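
For reference, such a handler is normally registered through the Streams
configuration; a minimal sketch (the application id, bootstrap servers, and
topic names are placeholders, and CustomExceptionHandler is the class
described above):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class BadRecordDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "bad-record-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Route deserialization failures through the custom handler.
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                  CustomExceptionHandler.class);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // trivial placeholder topology

        new KafkaStreams(builder.build(), props).start();
    }
}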

But the problem with this approach is that if an exception is raised for some
record and the stream is then stopped and restarted, it reads those bad
messages again.

I am storing those bad messages in a file in the filesystem, and with this
approach duplicate messages get appended to the file when the stream is
restarted, since the offsets of those bad messages are not committed.

Please let me know if I missed anything.

Regards and Thanks
Deepak Raghav


Re: Upgrade connectors logging from log4j to log4j2

2020-07-28 Thread Deepak Raghav
Hi Tom

Can you please reply to this.

Regards and Thanks
Deepak Raghav



On Mon, Jul 27, 2020 at 10:05 PM Deepak Raghav 
wrote:

> Hi Tom
>
> I have to change the log level at runtime i.e without restarting the
> worker process.
>
> Can you please share any suggestion on this with log4j.
>
> Regards and Thanks
> Deepak Raghav
>
>
>
> On Mon, Jul 27, 2020 at 7:09 PM Tom Bentley  wrote:
>
>> Hi Deepak,
>>
>> https://issues.apache.org/jira/browse/KAFKA-9366 already proposes to
>> replace log4j with log4j2 throughout Kafka.
>>
>> Kind regards,
>>
>> Tom
>>
>> On Mon, Jul 27, 2020 at 2:04 PM Deepak Raghav 
>> wrote:
>>
>> > Hi Team
>> >
>> > Request you to please have a look.
>> >
>> > Regards and Thanks
>> > Deepak Raghav
>> >
>> >
>> >
>> > On Thu, Jul 23, 2020 at 6:42 PM Deepak Raghav > >
>> > wrote:
>> >
>> > > Hi Team
>> > >
>> > > I have some source connector, which is using the logging provided by
>> > > kafka-connect framework.
>> > >
>> > > Now I need to change the log level dynamically at runtime i.e without
>> > > restarting the worker process.
>> > >
>> > > I found out that changing log level is not possible with log4j, so I
>> > > decided to upgrade to log4j2, could you please help me with this.
>> > >
>> > >
>> > > Regards and Thanks
>> > > Deepak Raghav
>> > >
>> > >
>> >
>>
>


Re: Upgrade connectors logging from log4j to log4j2

2020-07-27 Thread Deepak Raghav
Hi Tom

I have to change the log level at runtime i.e without restarting the worker
process.

Can you please share any suggestion on this with log4j.

Regards and Thanks
Deepak Raghav



On Mon, Jul 27, 2020 at 7:09 PM Tom Bentley  wrote:

> Hi Deepak,
>
> https://issues.apache.org/jira/browse/KAFKA-9366 already proposes to
> replace log4j with log4j2 throughout Kafka.
>
> Kind regards,
>
> Tom
>
> On Mon, Jul 27, 2020 at 2:04 PM Deepak Raghav 
> wrote:
>
> > Hi Team
> >
> > Request you to please have a look.
> >
> > Regards and Thanks
> > Deepak Raghav
> >
> >
> >
> > On Thu, Jul 23, 2020 at 6:42 PM Deepak Raghav 
> > wrote:
> >
> > > Hi Team
> > >
> > > I have some source connector, which is using the logging provided by
> > > kafka-connect framework.
> > >
> > > Now I need to change the log level dynamically at runtime i.e without
> > > restarting the worker process.
> > >
> > > I found out that changing log level is not possible with log4j, so I
> > > decided to upgrade to log4j2, could you please help me with this.
> > >
> > >
> > > Regards and Thanks
> > > Deepak Raghav
> > >
> > >
> >
>


Re: Upgrade connectors logging from log4j to log4j2

2020-07-27 Thread Deepak Raghav
Hi Team

Request you to please have a look.

Regards and Thanks
Deepak Raghav



On Thu, Jul 23, 2020 at 6:42 PM Deepak Raghav 
wrote:

> Hi Team
>
> I have some source connector, which is using the logging provided by
> kafka-connect framework.
>
> Now I need to change the log level dynamically at runtime i.e without
> restarting the worker process.
>
> I found out that changing log level is not possible with log4j, so I
> decided to upgrade to log4j2, could you please help me with this.
>
>
> Regards and Thanks
> Deepak Raghav
>
>


Upgrade connectors logging from log4j to log4j2

2020-07-23 Thread Deepak Raghav
Hi Team

I have some source connectors that use the logging provided by the
Kafka Connect framework.

Now I need to change the log level dynamically at runtime, i.e. without
restarting the worker process.

I found out that changing the log level at runtime is not possible with log4j,
so I decided to upgrade to log4j2. Could you please help me with this?
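
For reference, with log4j2 a logger's level can be changed programmatically at
runtime. A minimal sketch (illustrative only; it assumes log4j-core is on the
worker's classpath, and the logger name is a placeholder):

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class ConnectorLogLevel {
    // Switch the given logger (e.g. the connector's package) to DEBUG without a restart.
    public static void enableDebug(String loggerName) {
        Configurator.setLevel(loggerName, Level.DEBUG);
    }
}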


Regards and Thanks
Deepak Raghav


Re: Kafka Connect Connector Tasks Uneven Division

2020-06-12 Thread Deepak Raghav
Hi Robin

Request you to please reply.

Regards and Thanks
Deepak Raghav



On Wed, Jun 10, 2020 at 11:57 AM Deepak Raghav 
wrote:

> Hi  Robin
>
> Can you please reply.
>
> I just want to add one more thing: yesterday I tried with
> connect.protocol=eager. Task distribution was balanced after that.
>
> Regards and Thanks
> Deepak Raghav
>
>
>
> On Tue, Jun 9, 2020 at 2:37 PM Deepak Raghav 
> wrote:
>
>> Hi Robin
>>
>> Thanks for your reply and accept my apology for the delayed response.
>>
>> As you suggested that we should have a separate worker cluster based on
>> workload pattern. But as you said, task allocation is nondeterministic, so
>> same things can happen in the new cluster.
>>
>> Please let me know if my understanding is correct or not.
>>
>> Regards and Thanks
>> Deepak Raghav
>>
>>
>>
>> On Tue, May 26, 2020 at 8:20 PM Robin Moffatt  wrote:
>>
>>> The KIP for the current rebalancing protocol is probably a good
>>> reference:
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-415:+Incremental+Cooperative+Rebalancing+in+Kafka+Connect
>>>
>>>
>>> --
>>>
>>> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>>>
>>>
>>> On Tue, 26 May 2020 at 14:25, Deepak Raghav 
>>> wrote:
>>>
>>> > Hi Robin
>>> >
>>> > Thanks for the clarification.
>>> >
>>> > As you suggested, that task allocation between the workers is
>>> > nondeterministic. I have shared the same information within in my team
>>> but
>>> > there are some other parties, with whom I need to share this
>>> information as
>>> > explanation for the issue raised by them and I cannot show this mail
>>> as a
>>> > reference.
>>> >
>>> > It would be very great if you please share any link/discussion
>>> reference
>>> > regarding the same.
>>> >
>>> > Regards and Thanks
>>> > Deepak Raghav
>>> >
>>> >
>>> >
>>> > On Thu, May 21, 2020 at 2:12 PM Robin Moffatt 
>>> wrote:
>>> >
>>> > > I don't think you're right to assert that this is "expected
>>> behaviour":
>>> > >
>>> > > >  the tasks are divided in below pattern when they are first time
>>> > > registered
>>> > >
>>> > > Kafka Connect task allocation is non-deterministic.
>>> > >
>>> > > I'm still not clear if you're solving for a theoretical problem or an
>>> > > actual one. If this is an actual problem that you're encountering and
>>> > need
>>> > > a solution to then since the task allocation is not deterministic it
>>> > sounds
>>> > > like you need to deploy separate worker clusters based on the
>>> workload
>>> > > patterns that you are seeing and machine resources available.
>>> > >
>>> > >
>>> > > --
>>> > >
>>> > > Robin Moffatt | Senior Developer Advocate | ro...@confluent.io |
>>> @rmoff
>>> > >
>>> > >
>>> > > On Wed, 20 May 2020 at 21:29, Deepak Raghav <
>>> deepakragha...@gmail.com>
>>> > > wrote:
>>> > >
>>> > > > Hi Robin
>>> > > >
>>> > > > I had gone though the link you provided, It is not helpful in my
>>> case.
>>> > > > Apart from this, *I am not getting why the tasks are divided in
>>> *below
>>> > > > pattern* when they are *first time registered*, which is expected
>>> > > behavior.
>>> > > > I*s there any parameter which we can pass in worker property file
>>> which
>>> > > > handle the task assignment strategy like we have range assigner or
>>> > round
>>> > > > robin in consumer-group ?
>>> > > >
>>> > > > connector rest status api result after first registration :
>>> > > >
>>> > > > {
>>> > > >   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
>>> > > >   "connector": {
>>> > > > "state": "RUNNING",
>>> > > > "worker_id": "10.0.0.5:*8080*&q

Re: Kafka Connect Connector Tasks Uneven Division

2020-06-09 Thread Deepak Raghav
Hi  Robin

Can you please reply.

I just want to add one more thing: yesterday I tried with
connect.protocol=eager. Task distribution was balanced after that.
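
For reference, these are the two worker settings discussed in this thread (an
illustrative excerpt of connect-distributed.properties; the values are
examples, not a recommendation):

connect.protocol=eager                     # use the old eager rebalancing instead of incremental cooperative
scheduled.rebalance.max.delay.ms=300000    # with the cooperative protocol: how long lost tasks stay unassigned (5 min here)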

Regards and Thanks
Deepak Raghav



On Tue, Jun 9, 2020 at 2:37 PM Deepak Raghav 
wrote:

> Hi Robin
>
> Thanks for your reply and accept my apology for the delayed response.
>
> As you suggested that we should have a separate worker cluster based on
> workload pattern. But as you said, task allocation is nondeterministic, so
> same things can happen in the new cluster.
>
> Please let me know if my understanding is correct or not.
>
> Regards and Thanks
> Deepak Raghav
>
>
>
> On Tue, May 26, 2020 at 8:20 PM Robin Moffatt  wrote:
>
>> The KIP for the current rebalancing protocol is probably a good reference:
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-415:+Incremental+Cooperative+Rebalancing+in+Kafka+Connect
>>
>>
>> --
>>
>> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>>
>>
>> On Tue, 26 May 2020 at 14:25, Deepak Raghav 
>> wrote:
>>
>> > Hi Robin
>> >
>> > Thanks for the clarification.
>> >
>> > As you suggested, that task allocation between the workers is
>> > nondeterministic. I have shared the same information within in my team
>> but
>> > there are some other parties, with whom I need to share this
>> information as
>> > explanation for the issue raised by them and I cannot show this mail as
>> a
>> > reference.
>> >
>> > It would be very great if you please share any link/discussion reference
>> > regarding the same.
>> >
>> > Regards and Thanks
>> > Deepak Raghav
>> >
>> >
>> >
>> > On Thu, May 21, 2020 at 2:12 PM Robin Moffatt 
>> wrote:
>> >
>> > > I don't think you're right to assert that this is "expected
>> behaviour":
>> > >
>> > > >  the tasks are divided in below pattern when they are first time
>> > > registered
>> > >
>> > > Kafka Connect task allocation is non-deterministic.
>> > >
>> > > I'm still not clear if you're solving for a theoretical problem or an
>> > > actual one. If this is an actual problem that you're encountering and
>> > need
>> > > a solution to then since the task allocation is not deterministic it
>> > sounds
>> > > like you need to deploy separate worker clusters based on the workload
>> > > patterns that you are seeing and machine resources available.
>> > >
>> > >
>> > > --
>> > >
>> > > Robin Moffatt | Senior Developer Advocate | ro...@confluent.io |
>> @rmoff
>> > >
>> > >
>> > > On Wed, 20 May 2020 at 21:29, Deepak Raghav > >
>> > > wrote:
>> > >
>> > > > Hi Robin
>> > > >
>> > > > I had gone though the link you provided, It is not helpful in my
>> case.
>> > > > Apart from this, *I am not getting why the tasks are divided in
>> *below
>> > > > pattern* when they are *first time registered*, which is expected
>> > > behavior.
>> > > > I*s there any parameter which we can pass in worker property file
>> which
>> > > > handle the task assignment strategy like we have range assigner or
>> > round
>> > > > robin in consumer-group ?
>> > > >
>> > > > connector rest status api result after first registration :
>> > > >
>> > > > {
>> > > >   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
>> > > >   "connector": {
>> > > > "state": "RUNNING",
>> > > > "worker_id": "10.0.0.5:*8080*"
>> > > >   },
>> > > >   "tasks": [
>> > > > {
>> > > >   "id": 0,
>> > > >   "state": "RUNNING",
>> > > >   "worker_id": "10.0.0.4:*8078*"
>> > > > },
>> > > > {
>> > > >   "id": 1,
>> > > >   "state": "RUNNING",
>> > > >   "worker_id": "10.0.0.5:*8080*"
>> > > > }
>> > > >   ],
>> > > >   "type": "

Re: Kafka Connect Connector Tasks Uneven Division

2020-06-09 Thread Deepak Raghav
Hi Robin

Thanks for your reply, and please accept my apology for the delayed response.

You suggested that we should have a separate worker cluster based on the
workload pattern. But as you said, task allocation is nondeterministic, so the
same thing can happen in the new cluster.

Please let me know whether my understanding is correct.

Regards and Thanks
Deepak Raghav



On Tue, May 26, 2020 at 8:20 PM Robin Moffatt  wrote:

> The KIP for the current rebalancing protocol is probably a good reference:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-415:+Incremental+Cooperative+Rebalancing+in+Kafka+Connect
>
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Tue, 26 May 2020 at 14:25, Deepak Raghav 
> wrote:
>
> > Hi Robin
> >
> > Thanks for the clarification.
> >
> > As you suggested, that task allocation between the workers is
> > nondeterministic. I have shared the same information within in my team
> but
> > there are some other parties, with whom I need to share this information
> as
> > explanation for the issue raised by them and I cannot show this mail as a
> > reference.
> >
> > It would be very great if you please share any link/discussion reference
> > regarding the same.
> >
> > Regards and Thanks
> > Deepak Raghav
> >
> >
> >
> > On Thu, May 21, 2020 at 2:12 PM Robin Moffatt 
> wrote:
> >
> > > I don't think you're right to assert that this is "expected behaviour":
> > >
> > > >  the tasks are divided in below pattern when they are first time
> > > registered
> > >
> > > Kafka Connect task allocation is non-deterministic.
> > >
> > > I'm still not clear if you're solving for a theoretical problem or an
> > > actual one. If this is an actual problem that you're encountering and
> > need
> > > a solution to then since the task allocation is not deterministic it
> > sounds
> > > like you need to deploy separate worker clusters based on the workload
> > > patterns that you are seeing and machine resources available.
> > >
> > >
> > > --
> > >
> > > Robin Moffatt | Senior Developer Advocate | ro...@confluent.io |
> @rmoff
> > >
> > >
> > > On Wed, 20 May 2020 at 21:29, Deepak Raghav 
> > > wrote:
> > >
> > > > Hi Robin
> > > >
> > > > I had gone though the link you provided, It is not helpful in my
> case.
> > > > Apart from this, *I am not getting why the tasks are divided in
> *below
> > > > pattern* when they are *first time registered*, which is expected
> > > behavior.
> > > > I*s there any parameter which we can pass in worker property file
> which
> > > > handle the task assignment strategy like we have range assigner or
> > round
> > > > robin in consumer-group ?
> > > >
> > > > connector rest status api result after first registration :
> > > >
> > > > {
> > > >   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
> > > >   "connector": {
> > > > "state": "RUNNING",
> > > > "worker_id": "10.0.0.5:*8080*"
> > > >   },
> > > >   "tasks": [
> > > > {
> > > >   "id": 0,
> > > >   "state": "RUNNING",
> > > >   "worker_id": "10.0.0.4:*8078*"
> > > > },
> > > > {
> > > >   "id": 1,
> > > >   "state": "RUNNING",
> > > >   "worker_id": "10.0.0.5:*8080*"
> > > > }
> > > >   ],
> > > >   "type": "sink"
> > > > }
> > > >
> > > > and
> > > >
> > > > {
> > > >   "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
> > > >   "connector": {
> > > > "state": "RUNNING",
> > > > "worker_id": "10.0.0.4:*8078*"
> > > >   },
> > > >   "tasks": [
> > > > {
> > > >   "id": 0,
> > > >   "state": "RUNNING",
> > > >   "worker_id": "10.0.0.4:*8078*"
> > > > },
> > > > {
> > 

Re: Single Task running both worker node

2020-06-01 Thread Deepak Raghav
Hi Team

Just a Gentle Reminder.

Regards and Thanks
Deepak Raghav



On Fri, May 29, 2020 at 1:15 PM Deepak Raghav 
wrote:

> Hi Team
>
> Recently, I had seen strange behavior in kafka-connect. We have source
> connector with single task only, which reads from S3 bucket and copy to
> kafka topic.We have two worker node in a cluster, so at any point of time a
> task can be assigned to single worker node.
>
> I saw in logs that both the worker node were processing/ reading the data
> from S3 bucket, which should be impossible since we have configured that a
> single task should be created and read the data.
>
> Is it possible in any scenario that due to worker process restarting
> multiple times or registering/ de-registering the connector, a task can be
> left assigned in both the worker node.
>
> Note : I have seen this only one time, after that it was never reproduced.
>
> Regards and Thanks
> Deepak Raghav
>
>


Single Task running both worker node

2020-05-29 Thread Deepak Raghav
Hi Team

Recently, I saw strange behavior in Kafka Connect. We have a source connector
with a single task only, which reads from an S3 bucket and copies to a Kafka
topic. We have two worker nodes in a cluster, so at any point in time the task
can be assigned to only one worker node.

I saw in the logs that both worker nodes were processing/reading the data
from the S3 bucket, which should be impossible since we have configured that a
single task should be created to read the data.

Is it possible, in any scenario, that due to the worker process restarting
multiple times or the connector being registered/de-registered, a task can be
left assigned on both worker nodes?

Note: I have seen this only once; after that it was never reproduced.

Regards and Thanks
Deepak Raghav


Re: Kafka Connect Connector Tasks Uneven Division

2020-05-26 Thread Deepak Raghav
Hi Robin

Thanks for the clarification.

As you mentioned, task allocation between the workers is nondeterministic. I
have shared this information within my team, but there are some other parties
with whom I need to share it as an explanation for the issue they raised, and I
cannot show this mail as a reference.

It would be great if you could share any link or discussion reference
about this.

Regards and Thanks
Deepak Raghav



On Thu, May 21, 2020 at 2:12 PM Robin Moffatt  wrote:

> I don't think you're right to assert that this is "expected behaviour":
>
> >  the tasks are divided in below pattern when they are first time
> registered
>
> Kafka Connect task allocation is non-deterministic.
>
> I'm still not clear if you're solving for a theoretical problem or an
> actual one. If this is an actual problem that you're encountering and need
> a solution to then since the task allocation is not deterministic it sounds
> like you need to deploy separate worker clusters based on the workload
> patterns that you are seeing and machine resources available.
>
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Wed, 20 May 2020 at 21:29, Deepak Raghav 
> wrote:
>
> > Hi Robin
> >
> > I had gone though the link you provided, It is not helpful in my case.
> > Apart from this, *I am not getting why the tasks are divided in *below
> > pattern* when they are *first time registered*, which is expected
> behavior.
> > I*s there any parameter which we can pass in worker property file which
> > handle the task assignment strategy like we have range assigner or round
> > robin in consumer-group ?
> >
> > connector rest status api result after first registration :
> >
> > {
> >   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
> >   "connector": {
> > "state": "RUNNING",
> > "worker_id": "10.0.0.5:*8080*"
> >   },
> >   "tasks": [
> > {
> >   "id": 0,
> >   "state": "RUNNING",
> >   "worker_id": "10.0.0.4:*8078*"
> > },
> > {
> >   "id": 1,
> >   "state": "RUNNING",
> >   "worker_id": "10.0.0.5:*8080*"
> > }
> >   ],
> >   "type": "sink"
> > }
> >
> > and
> >
> > {
> >   "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
> >   "connector": {
> > "state": "RUNNING",
> > "worker_id": "10.0.0.4:*8078*"
> >   },
> >   "tasks": [
> > {
> >   "id": 0,
> >   "state": "RUNNING",
> >   "worker_id": "10.0.0.4:*8078*"
> > },
> > {
> >   "id": 1,
> >   "state": "RUNNING",
> >   "worker_id": "10.0.0.5:*8080*"
> > }
> >   ],
> >   "type": "sink"
> > }
> >
> >
> > But when I stop the second worker process and wait for
> > scheduled.rebalance.max.delay.ms time i.e 5 min to over, and start the
> > process again. Result is different.
> >
> > {
> >   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
> >   "connector": {
> > "state": "RUNNING",
> > "worker_id": "10.0.0.5:*8080*"
> >   },
> >   "tasks": [
> > {
> >   "id": 0,
> >   "state": "RUNNING",
> >   "worker_id": "10.0.0.5:*8080*"
> > },
> > {
> >   "id": 1,
> >   "state": "RUNNING",
> >   "worker_id": "10.0.0.5:*8080*"
> > }
> >   ],
> >   "type": "sink"
> > }
> >
> > and
> >
> > {
> >   "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
> >   "connector": {
> > "state": "RUNNING",
> > "worker_id": "10.0.0.4:*8078*"
> >   },
> >   "tasks": [
> > {
> >   "id": 0,
> >   "state": "RUNNING",
> >   "worker_id": "10.0.0.4:*8078*"
> > },
> > {
> >   

Re: Kafka Connect Connector Tasks Uneven Division

2020-05-20 Thread Deepak Raghav
Hi Robin

I had gone though the link you provided, It is not helpful in my case.
Apart from this, *I am not getting why the tasks are divided in *below
pattern* when they are *first time registered*, which is expected behavior.
I*s there any parameter which we can pass in worker property file which
handle the task assignment strategy like we have range assigner or round
robin in consumer-group ?

connector rest status api result after first registration :

{
  "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
  "connector": {
"state": "RUNNING",
"worker_id": "10.0.0.5:*8080*"
  },
  "tasks": [
{
  "id": 0,
  "state": "RUNNING",
  "worker_id": "10.0.0.4:*8078*"
},
{
  "id": 1,
  "state": "RUNNING",
  "worker_id": "10.0.0.5:*8080*"
}
  ],
  "type": "sink"
}

and

{
  "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
  "connector": {
"state": "RUNNING",
"worker_id": "10.0.0.4:*8078*"
  },
  "tasks": [
{
  "id": 0,
  "state": "RUNNING",
  "worker_id": "10.0.0.4:*8078*"
},
{
  "id": 1,
  "state": "RUNNING",
  "worker_id": "10.0.0.5:*8080*"
}
  ],
  "type": "sink"
}


But when I stop the second worker process, wait for the
scheduled.rebalance.max.delay.ms time (i.e. 5 min) to pass, and start the
process again, the result is different.

{
  "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
  "connector": {
"state": "RUNNING",
"worker_id": "10.0.0.5:*8080*"
  },
  "tasks": [
{
  "id": 0,
  "state": "RUNNING",
  "worker_id": "10.0.0.5:*8080*"
},
{
  "id": 1,
  "state": "RUNNING",
  "worker_id": "10.0.0.5:*8080*"
}
  ],
  "type": "sink"
}

and

{
  "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
  "connector": {
"state": "RUNNING",
"worker_id": "10.0.0.4:*8078*"
  },
  "tasks": [
{
  "id": 0,
  "state": "RUNNING",
  "worker_id": "10.0.0.4:*8078*"
},
{
  "id": 1,
  "state": "RUNNING",
  "worker_id": "10.0.0.4:*8078*"
}
  ],
  "type": "sink"
}

Regards and Thanks
Deepak Raghav



On Wed, May 20, 2020 at 9:29 PM Robin Moffatt  wrote:

> Thanks for the clarification. If this is an actual problem that you're
> encountering and need a solution to then since the task allocation is not
> deterministic it sounds like you need to deploy separate worker clusters
> based on the workload patterns that you are seeing and machine resources
> available.
>
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Wed, 20 May 2020 at 16:39, Deepak Raghav 
> wrote:
>
> > Hi Robin
> >
> > Replying to your query i.e
> >
> > One thing I'd ask at this point is though if it makes any difference
> where
> > the tasks execute?
> >
> > It actually makes difference to us, we have 16 connectors and as I stated
> > tasks division earlier, first 8 connector' task are assigned to first
> > worker process and another connector's task to another worker process and
> > just to mention that these 16 connectors are sink connectors. Each sink
> > connector consumes message from different topic.There may be a case when
> > messages are coming only for first 8 connector's topic and because all
> the
> > tasks of these connectors are assigned to First Worker, load would be
> high
> > on it and another set of connectors in another worker would be idle.
> >
> > Instead, if the task would have been divided evenly then this case would
> > have been avoided. Because tasks of each connector would be present in
> both
> > workers process like below :
> >
> > *W1*       *W2*
> >  C1T1      C1T2
> >  C2T1      C2T2
> >
> > I hope, I gave your answer,
> >
> >
> > Regards and Thanks
> > Deepak Raghav
> >
> >
> >
> > On Wed, May 20, 2020 at 4:42 PM Robin Moffatt 
> wrote:
> >
> > > OK, I understand better now.
> > >
> &

Re: Kafka Connect Connector Tasks Uneven Division

2020-05-20 Thread Deepak Raghav
Hi Robin

Replying to your query, i.e.:

"One thing I'd ask at this point is though if it makes any difference where
the tasks execute?"

It actually makes a difference to us. We have 16 connectors and, as I stated
in the task division earlier, the first 8 connectors' tasks are assigned to the
first worker process and the other connectors' tasks to the other worker
process; just to mention, these 16 connectors are sink connectors. Each sink
connector consumes messages from a different topic. There may be a case where
messages are coming in only for the first 8 connectors' topics, and because all
the tasks of these connectors are assigned to the first worker, the load would
be high on it while the other set of connectors on the other worker would sit
idle.

Instead, if the tasks had been divided evenly, this case would have been
avoided, because tasks of each connector would be present in both worker
processes, like below:

*W1*       *W2*
 C1T1      C1T2
 C2T1      C2T2

I hope that answers your question.


Regards and Thanks
Deepak Raghav



On Wed, May 20, 2020 at 4:42 PM Robin Moffatt  wrote:

> OK, I understand better now.
>
> You can read more about the guts of the rebalancing protocol that Kafka
> Connect uses as of Apache Kafka 2.3 an onwards here:
> https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
>
> One thing I'd ask at this point is though if it makes any difference where
> the tasks execute? The point of a cluster is that Kafka Connect manages the
> workload allocation. If you need workload separation and
> guaranteed execution locality I would suggest separate Kafka Connect
> distributed clusters.
>
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Wed, 20 May 2020 at 10:24, Deepak Raghav 
> wrote:
>
> > Hi Robin
> >
> > Thanks for your reply.
> >
> > We are having two worker on different IP. The example which I gave you it
> > was just a example. We are using kafka version 2.3.1.
> >
> > Let me tell you again with a simple example.
> >
> > Suppose, we have two EC2 node, N1 and N2 having worker process W1 and W2
> > running in distribute mode with groupId i.e in same cluster and two
> > connectors with having two task each i.e
> >
> > Node N1: W1 is running
> > Node N2 : W2 is running
> >
> > First Connector (C1) : Task1 with id : C1T1 and task 2 with id : C1T2
> > Second Connector (C2) : Task1 with id : C2T1 and task 2 with id : C2T2
> >
> > Now Suppose If both W1 and W2 worker process are running  and I register
> > Connector C1 and C2 one after another i.e sequentially, on any of the
> > worker process, the tasks division between the worker
> > node are happening like below, which is expected.
> >
> > *W1*       *W2*
> > C1T1       C1T2
> > C2T1       C2T2
> >
> > Now, suppose I stop one worker process e.g W2 and start after some time,
> > the tasks division is changed like below i.e first connector's task move
> to
> > W1 and second connector's task move to W2
> >
> > *W1*       *W2*
> > C1T1       C2T1
> > C1T2       C2T2
> >
> >
> > Please let me know, If it is understandable or not.
> >
> > Note : Actually, In production, we are gonna have 16 connectors having 10
> > task each and two worker node. With above scenario, first 8 connectors's
> > task move to W1 and next 8 connector' task move to W2, Which is not
> > expected.
> >
> >
> > Regards and Thanks
> > Deepak Raghav
> >
> >
> >
> > On Wed, May 20, 2020 at 1:41 PM Robin Moffatt 
> wrote:
> >
> > > So you're running two workers on the same machine (10.0.0.4), is
> > > that correct? Normally you'd run one worker per machine unless there
> was
> > a
> > > particular reason otherwise.
> > > What version of Apache Kafka are you using?
> > > I'm not clear from your question if the distribution of tasks is
> > > presenting a problem to you (if so please describe why), or if you're
> > just
> > > interested in the theory behind the rebalancing protocol?
> > >
> > >
> > > --
> > >
> > > Robin Moffatt | Senior Developer Advocate | ro...@confluent.io |
> @rmoff
> > >
> > >
> > > On Wed, 20 May 2020 at 06:46, Deepak Raghav 
> > > wrote:
> > >
> > > > Hi
> > > >
> > > > Please, can anybody help me with this?
> > > >
> > > > Regar

Re: Kafka Connect Connector Tasks Uneven Division

2020-05-20 Thread Deepak Raghav
Hi Robin

Thanks for your reply.

We have two workers on different IPs; the example I gave you was just an
example. We are using Kafka version 2.3.1.

Let me explain again with a simple example.

Suppose we have two EC2 nodes, N1 and N2, with worker processes W1 and W2
running in distributed mode with the same group.id (i.e. in the same cluster),
and two connectors with two tasks each, i.e.

Node N1: W1 is running
Node N2: W2 is running

First connector (C1): task 1 with id C1T1 and task 2 with id C1T2
Second connector (C2): task 1 with id C2T1 and task 2 with id C2T2

Now suppose both the W1 and W2 worker processes are running and I register
connectors C1 and C2 one after another (i.e. sequentially) on either of the
worker processes. The task division between the worker nodes happens like
below, which is expected:

*W1*       *W2*
C1T1       C1T2
C2T1       C2T2

Now suppose I stop one worker process, e.g. W2, and start it again after some
time. The task division changes like below, i.e. the first connector's tasks
move to W1 and the second connector's tasks move to W2:

*W1*       *W2*
C1T1       C2T1
C1T2       C2T2


Please let me know if this is understandable.

Note: Actually, in production we are going to have 16 connectors with 10
tasks each and two worker nodes. With the above scenario, the first 8
connectors' tasks move to W1 and the next 8 connectors' tasks move to W2,
which is not expected.


Regards and Thanks
Deepak Raghav



On Wed, May 20, 2020 at 1:41 PM Robin Moffatt  wrote:

> So you're running two workers on the same machine (10.0.0.4), is
> that correct? Normally you'd run one worker per machine unless there was a
> particular reason otherwise.
> What version of Apache Kafka are you using?
> I'm not clear from your question if the distribution of tasks is
> presenting a problem to you (if so please describe why), or if you're just
> interested in the theory behind the rebalancing protocol?
>
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Wed, 20 May 2020 at 06:46, Deepak Raghav 
> wrote:
>
> > Hi
> >
> > Please, can anybody help me with this?
> >
> > Regards and Thanks
> > Deepak Raghav
> >
> >
> >
> > On Tue, May 19, 2020 at 1:37 PM Deepak Raghav 
> > wrote:
> >
> > > Hi Team
> > >
> > > We have two worker node in a cluster and 2 connector with having 10
> tasks
> > > each.
> > >
> > > Now, suppose if we have two kafka connect process W1(Port 8080) and
> > > W2(Port 8078) started already in distribute mode and then register the
> > > connectors, task of one connector i.e 10 tasks are divided equally
> > between
> > > two worker i.e first task of A connector to W1 worker node and sec task
> > of
> > > A connector to W2 worker node, similarly for first task of B connector,
> > > will go to W1 node and sec task of B connector go to W2 node.
> > >
> > > e.g
> > > *#First Connector : *
> > > {
> > >   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
> > >   "connector": {
> > > "state": "RUNNING",
> > > "worker_id": "10.0.0.4:*8080*"
> > >   },
> > >   "tasks": [
> > > {
> > >   "id": 0,
> > >   "state": "RUNNING",
> > >   "worker_id": "10.0.0.4:*8078*"
> > > },
> > > {
> > >   "id": 1,
> > >   "state": "RUNNING",
> > >   "worker_id": "10.0.0.4:8080"
> > > },
> > > {
> > >   "id": 2,
> > >   "state": "RUNNING",
> > >   "worker_id": "10.0.0.4:8078"
> > > },
> > > {
> > >   "id": 3,
> > >   "state": "RUNNING",
> > >   "worker_id": "10.0.0.4:8080"
> > > },
> > > {
> > >   "id": 4,
> > >   "state": "RUNNING",
> > >   "worker_id": "10.0.0.4:8078"
> > > },
> > > {
> > >   "id": 5,
> > >   "state": "RUNNING",
> > >   "worker_id": "10.0.0.4:8080"
> > > },
> > > {
> > >   "id": 6,
> > >   "state": "RUNNING&

Re: Kafka Connect Connector Tasks Uneven Division

2020-05-19 Thread Deepak Raghav
Hi

Please, can anybody help me with this?

Regards and Thanks
Deepak Raghav



On Tue, May 19, 2020 at 1:37 PM Deepak Raghav 
wrote:

> Hi Team
>
> We have two worker nodes in a cluster and 2 connectors with 10 tasks
> each.
>
> Now, suppose we have two Kafka Connect processes, W1 (port 8080) and
> W2 (port 8078), already started in distributed mode, and we then register the
> connectors. The tasks of one connector, i.e. 10 tasks, are divided equally
> between the two workers: the first task of connector A goes to worker node W1
> and the second task of connector A to worker node W2; similarly, the first
> task of connector B goes to node W1 and the second task of connector B to node W2.
>
> e.g
> *#First Connector : *
> {
>   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
>   "connector": {
> "state": "RUNNING",
> "worker_id": "10.0.0.4:*8080*"
>   },
>   "tasks": [
> {
>   "id": 0,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:*8078*"
> },
> {
>   "id": 1,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 2,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 3,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 4,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 5,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 6,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 7,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 8,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 9,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> }
>   ],
>   "type": "sink"
> }
>
>
> *#Sec connector*
>
> {
>   "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
>   "connector": {
> "state": "RUNNING",
> "worker_id": "10.0.0.4:8078"
>   },
>   "tasks": [
> {
>   "id": 0,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 1,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 2,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 3,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 4,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 5,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 6,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 7,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> },
> {
>   "id": 8,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8078"
> },
> {
>   "id": 9,
>   "state": "RUNNING",
>   "worker_id": "10.0.0.4:8080"
> }
>   ],
>   "type": "sink"
> }
>
> But I have seen a strange behavior: when I just shut down the W2 worker node
> and start it again, the tasks are divided in a different way, i.e. all the
> tasks of connector A end up on node W1 and the tasks of connector B on node W2.
>
> Can you please have a look at this?
>
>

optimal message size in Kafka

2019-05-14 Thread Raghav
Hi

Is there any study that shows why smaller messages are optimal for Kafka, and
why throughput decreases as the message size grows to 1 MB and beyond? What
design choices in Kafka lead to this behavior?

Can any developer or committer share any insight and point to the relevant
piece of code?

Thanks

R


Re: KIP-272: Add API version tag to broker's RequestsPerSec metric

2019-03-13 Thread Raghav
Hi

The above KIP broke our graphs when we moved from 1.1 to 2.1. I can see that
this has been mentioned in the release notes. We were using a Java client to
aggregate metrics via MBeans, but now the same code does not work even after
we provide the version string as mentioned here:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric

Can someone please share a sample Java code snippet that shows how to get this
metric and aggregate it over all the versions?

I really hope someone can help us here so we can fix our broken graphs.

Thanks. Regards.

-- 
Raghav


Re: Kafka 2.1.0 JMX mbean broken ?

2019-03-12 Thread Raghav
Hi ManiKumar

Can you suggest a Java API snippet that does this? I tried something like the
code below but could not get it to work. I am seeking help only after trying
everything I could. Breaking backward compatibility is tough for many
customers like us, unless there is a better, new way to make it work
seamlessly. Any help is greatly appreciated.

boolean processMetrics(String name, String mBeamName, String metricName) {
  if (!isInitialized) {
    initializeMBeam();
  }
  try {
    ObjectName bean = new ObjectName(mBeamName);
    MBeanInfo info = remote.getMBeanInfo(bean);
    MBeanAttributeInfo[] attributes = info.getAttributes();
    log.info("Attribute length " + attributes.length);
    for (MBeanAttributeInfo attr : attributes) {
      try {
        if (metricName.equals(attr.getName())) {
          log.info(attr.getName() + ", " + remote.getAttribute(bean, attr.getName()));
        }
      } catch (Exception e) {
        log.error("Failed to get metric info");
        continue;
      }
    }
  } catch (Exception e) {
    log.error("Failed to get metric info");
  }
  return true;
}

Thanks.

R

On Fri, Mar 8, 2019 at 10:41 PM Manikumar  wrote:

> Hi Raghav,
>
> As you know, KIP-272 added a "version" tag to the RequestsPerSec metric to
> monitor requests for each version.
> As mentioned in the KIP, to get total count per request (across all
> versions), we need to aggregate over all versions.
> You may need to automate this by fetching all metrics using wildcard and
> aggregating over all versions.
> We may need to use JMX readers which support wildcard queries.
>
> All possible protocol request versions are available here:
> https://kafka.apache.org/protocol.html#The_Messages_Produce
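
For reference, a minimal sketch of the wildcard approach described above
(illustrative only; it assumes an already-established MBeanServerConnection and
sums the OneMinuteRate attribute, one of several attributes this meter exposes):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public final class RequestsPerSecAggregator {

    // Sums RequestsPerSec for Produce requests across all "version" tags.
    public static double produceRequestsPerSec(MBeanServerConnection remote) throws Exception {
        ObjectName pattern = new ObjectName(
            "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce,version=*");
        Set<ObjectName> names = remote.queryNames(pattern, null);
        double total = 0.0;
        for (ObjectName name : names) {
            total += ((Number) remote.getAttribute(name, "OneMinuteRate")).doubleValue();
        }
        return total;
    }
}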
>
>
> On Sat, Mar 9, 2019 at 5:59 AM Raghav  wrote:
>
>> Hello Allen, Ted, Mani, Gwen, James
>>
>> https://www.mail-archive.com/dev@kafka.apache.org/msg86226.html
>>
>> I read this email thread and I see that you guys were split in going
>> ahead with this change. This has broken our dashboard in 2.1 Kafka, which
>> is ok as long as we know how to fix it.
>>
>> Can you please help us figure out the answer for the email below ? It
>> will be greatly appreciated. We just want to know how to find the version
>> number ?
>>
>> Many thanks.
>>
>> R
>>
>> On Fri, Dec 14, 2018 at 5:16 PM Raghav  wrote:
>>
>>> I got it to work. I fired up a console and then saw what beans are
>>> registered there and then queries using the code. It works then.
>>>
>>> But the apiVersion is different and how do we know what apiVersion a
>>> producer is using ? Our dashboards cannot have a-priori knowledge of the
>>> version numbers, and wildcards don't work. See the screenshot below,
>>> apiVersion is 7. Where did this come from ? Can someone please help to
>>> understand.
>>>
>>> [image: jmx.png]
>>>
>>>
>>>
>>> On Fri, Dec 14, 2018 at 4:29 PM Raghav  wrote:
>>>
>>>> Is this a test case for this commit:
>>>> https://github.com/apache/kafka/pull/4506 ? I have tried using all
>>>> possible cases but cannot get it to work.
>>>>
>>>> I cannot get it to work using mbean reader via JMX. Can any one please
>>>> help? Or atleast confirm that it is broken. Thanks
>>>>
>>>> R
>>>>
>>>> On Fri, Dec 14, 2018 at 6:34 AM Raghav  wrote:
>>>>
>>>>> Thanks Ismael. How to query it in 2.1 ? I tried all possible ways
>>>>> including using version, but I am still getting the same exception 
>>>>> message.
>>>>>
>>>>> Thanks for your help.
>>>>>
>>>>> On Thu, Dec 13, 2018 at 7:19 PM Ismael Juma  wrote:
>>>>>
>>>>>> The metric was changed to include the API version. I believe this was
>>>>>> in
>>>>>> the upgrade notes for 2.0.0.
>>>>>>
>>>>>> Ismael
>>>>>>
>>>>>> On Thu, Dec 13, 2018, 3:35 PM Raghav >>>>>
>>>>>> > Hi
>>>>>> >
>>>>>> > We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to
>>>>>> monitor
>>>>>> > our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed
>>>>>> that the
>>>>>> > *below* mentioned metric can't be retrieved
>>>>>> > and we get the below exception:

Re: Kafka 2.1.0 JMX mbean broken ?

2019-03-08 Thread Raghav
Hello Allen, Ted, Mani, Gwen, James

https://www.mail-archive.com/dev@kafka.apache.org/msg86226.html

I read this email thread and I see that you were split on going ahead with
this change. This has broken our dashboard in Kafka 2.1, which is OK as long
as we know how to fix it.

Can you please help us figure out the answer for the email below? It would be
greatly appreciated. We just want to know how to find the version number.

Many thanks.

R

On Fri, Dec 14, 2018 at 5:16 PM Raghav  wrote:

> I got it to work. I fired up a console and then saw what beans are
> registered there and then queries using the code. It works then.
>
> But the apiVersion is different and how do we know what apiVersion a
> producer is using ? Our dashboards cannot have a-priori knowledge of the
> version numbers, and wildcards don't work. See the screenshot below,
> apiVersion is 7. Where did this come from ? Can someone please help to
> understand.
>
> [image: jmx.png]
>
>
>
> On Fri, Dec 14, 2018 at 4:29 PM Raghav  wrote:
>
>> Is this a test case for this commit:
>> https://github.com/apache/kafka/pull/4506 ? I have tried using all
>> possible cases but cannot get it to work.
>>
>> I cannot get it to work using mbean reader via JMX. Can any one please
>> help? Or atleast confirm that it is broken. Thanks
>>
>> R
>>
>> On Fri, Dec 14, 2018 at 6:34 AM Raghav  wrote:
>>
>>> Thanks Ismael. How to query it in 2.1 ? I tried all possible ways
>>> including using version, but I am still getting the same exception message.
>>>
>>> Thanks for your help.
>>>
>>> On Thu, Dec 13, 2018 at 7:19 PM Ismael Juma  wrote:
>>>
>>>> The metric was changed to include the API version. I believe this was in
>>>> the upgrade notes for 2.0.0.
>>>>
>>>> Ismael
>>>>
>>>> On Thu, Dec 13, 2018, 3:35 PM Raghav >>>
>>>> > Hi
>>>> >
>>>> > We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to
>>>> monitor
>>>> > our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed
>>>> that the
>>>> > *below* mentioned metric can't be retrieved
>>>> > and we get the below exception:
>>>> >
>>>> >
>>>> *"kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"*
>>>> >
>>>> > javax.management.InstanceNotFoundException:
>>>> > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce
>>>> > at
>>>> >
>>>> >
>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>>>> > at
>>>> >
>>>> >
>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1375)
>>>> > at
>>>> >
>>>> >
>>>> com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
>>>> > at
>>>> >
>>>> >
>>>> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1462)
>>>> > at
>>>> >
>>>> >
>>>> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>>>> > at
>>>> >
>>>> >
>>>> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>>>> > at
>>>> >
>>>> >
>>>> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>>>> > at
>>>> >
>>>> >
>>>> javax.management.remote.rmi.RMIConnectionImpl.getMBeanInfo(RMIConnectionImpl.java:905)
>>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> > at
>>>> >
>>>> >
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>> > at
>>>> >
>>>> >
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> > at java.lang.reflect.Method.invoke(Method.java:498)
>>>> > at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>>>> > at sun.rmi.transport.Transport$1.run(Transport.java:200)
>>>> > at sun.rmi.transport.Transport$1.run(Transport.java:197)
>>>> > at java.secur

Re: Kafka 2.1.0 JMX mbean broken ?

2018-12-14 Thread Raghav
I got it to work. I fired up a console, saw which beans are registered there,
and then queried them from the code. It works then.

But the apiVersion differs, and how do we know which apiVersion a producer is
using? Our dashboards cannot have a priori knowledge of the version numbers,
and wildcards don't work. See the screenshot below: apiVersion is 7. Where did
this come from? Can someone please help me understand?

[image: jmx.png]



On Fri, Dec 14, 2018 at 4:29 PM Raghav  wrote:

> Is this a test case for this commit:
> https://github.com/apache/kafka/pull/4506 ? I have tried using all
> possible cases but cannot get it to work.
>
> I cannot get it to work using mbean reader via JMX. Can any one please
> help? Or atleast confirm that it is broken. Thanks
>
> R
>
> On Fri, Dec 14, 2018 at 6:34 AM Raghav  wrote:
>
>> Thanks Ismael. How to query it in 2.1 ? I tried all possible ways
>> including using version, but I am still getting the same exception message.
>>
>> Thanks for your help.
>>
>> On Thu, Dec 13, 2018 at 7:19 PM Ismael Juma  wrote:
>>
>>> The metric was changed to include the API version. I believe this was in
>>> the upgrade notes for 2.0.0.
>>>
>>> Ismael
>>>
>>> On Thu, Dec 13, 2018, 3:35 PM Raghav >>
>>> > Hi
>>> >
>>> > We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to
>>> monitor
>>> > our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed
>>> that the
>>> > *below* mentioned metric can't be retrieved
>>> > and we get the below exception:
>>> >
>>> >
>>> *"kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"*
>>> >
>>> > javax.management.InstanceNotFoundException:
>>> > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce
>>> > at
>>> >
>>> >
>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>>> > at
>>> >
>>> >
>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1375)
>>> > at
>>> >
>>> >
>>> com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
>>> > at
>>> >
>>> >
>>> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1462)
>>> > at
>>> >
>>> >
>>> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>>> > at
>>> >
>>> >
>>> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>>> > at
>>> >
>>> >
>>> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>>> > at
>>> >
>>> >
>>> javax.management.remote.rmi.RMIConnectionImpl.getMBeanInfo(RMIConnectionImpl.java:905)
>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> > at
>>> >
>>> >
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> > at
>>> >
>>> >
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> > at java.lang.reflect.Method.invoke(Method.java:498)
>>> > at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>>> > at sun.rmi.transport.Transport$1.run(Transport.java:200)
>>> > at sun.rmi.transport.Transport$1.run(Transport.java:197)
>>> > at java.security.AccessController.doPrivileged(Native Method)
>>> > at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>>> > at
>>> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>>> > at
>>> >
>>> >
>>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>>> > at
>>> >
>>> >
>>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>>> > at java.security.AccessController.doPrivileged(Native Method)
>>> > at
>>> >
>>> >
>>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>>> > at
>>> >
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> > at
>>> >
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> > at java.lang.Thread.run(Thread.java:745)
>>> > at
>>> >
>>> >
>>> sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:276)
>>> > at
>>> >
>>> sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:253)
>>> > at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
>>> > at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
>>> > at
>>> javax.management.remote.rmi.RMIConnectionImpl_Stub.getMBeanInfo(Unknown
>>> > Source)
>>> > at
>>> >
>>> >
>>> javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getMBeanInfo(RMIConnector.java:1079)
>>> >
>>> >
>>> > We even tried adding version to it, but no avail.
>>> >
>>> > Can someone please guide us how to
>>> > get
>>> "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"
>>> > value ?
>>> >
>>> > R
>>> >
>>>
>>
>>
>> --
>> Raghav
>>
>
>
> --
> Raghav
>


-- 
Raghav


Re: Kafka 2.1.0 JMX mbean broken ?

2018-12-14 Thread Raghav
Is this a test case for this commit:
https://github.com/apache/kafka/pull/4506 ? I have tried using all possible
cases but cannot get it to work.

I cannot get it to work using an MBean reader via JMX. Can anyone please
help, or at least confirm that it is broken? Thanks

R

On Fri, Dec 14, 2018 at 6:34 AM Raghav  wrote:

> Thanks Ismael. How to query it in 2.1 ? I tried all possible ways
> including using version, but I am still getting the same exception message.
>
> Thanks for your help.
>
> On Thu, Dec 13, 2018 at 7:19 PM Ismael Juma  wrote:
>
>> The metric was changed to include the API version. I believe this was in
>> the upgrade notes for 2.0.0.
>>
>> Ismael
>>
>> On Thu, Dec 13, 2018, 3:35 PM Raghav >
>> > Hi
>> >
>> > We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to
>> monitor
>> > our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed that
>> the
>> > *below* mentioned metric can't be retrieved
>> > and we get the below exception:
>> >
>> >
>> *"kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"*
>> >
>> > javax.management.InstanceNotFoundException:
>> > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce
>> > at
>> >
>> >
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>> > at
>> >
>> >
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1375)
>> > at
>> >
>> >
>> com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
>> > at
>> >
>> >
>> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1462)
>> > at
>> >
>> >
>> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>> > at
>> >
>> >
>> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>> > at
>> >
>> >
>> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>> > at
>> >
>> >
>> javax.management.remote.rmi.RMIConnectionImpl.getMBeanInfo(RMIConnectionImpl.java:905)
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > at
>> >
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > at
>> >
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.lang.reflect.Method.invoke(Method.java:498)
>> > at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>> > at sun.rmi.transport.Transport$1.run(Transport.java:200)
>> > at sun.rmi.transport.Transport$1.run(Transport.java:197)
>> > at java.security.AccessController.doPrivileged(Native Method)
>> > at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>> > at
>> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>> > at
>> >
>> >
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>> > at
>> >
>> >
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>> > at java.security.AccessController.doPrivileged(Native Method)
>> > at
>> >
>> >
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>> > at
>> >
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> > at
>> >
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> > at java.lang.Thread.run(Thread.java:745)
>> > at
>> >
>> >
>> sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:276)
>> > at
>> >
>> sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:253)
>> > at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
>> > at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
>> > at
>> javax.management.remote.rmi.RMIConnectionImpl_Stub.getMBeanInfo(Unknown
>> > Source)
>> > at
>> >
>> >
>> javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getMBeanInfo(RMIConnector.java:1079)
>> >
>> >
>> > We even tried adding version to it, but no avail.
>> >
>> > Can someone please guide us how to
>> > get
>> "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"
>> > value ?
>> >
>> > R
>> >
>>
>
>
> --
> Raghav
>


-- 
Raghav
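
A follow-up note for anyone hitting the same wall: as Ismael points out, since 2.0 the request
metrics carry an extra "version" tag, so there is now one mbean per produce API version the broker
has seen rather than the single name that existed in 1.1.0. The sketch below is one way to read it
over JMX with a pattern query; it is only an illustration, the broker host and JMX port are
placeholders, and it assumes the broker JVM was started with JMX enabled (e.g. JMX_PORT=9999).

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RequestsPerSecReader {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; adjust to your broker's JMX endpoint.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Since 2.0 the mbean carries a "version" tag, so match every version with a pattern.
            ObjectName pattern = new ObjectName(
                    "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce,*");
            double total = 0.0;
            for (ObjectName name : mbsc.queryNames(pattern, null)) {
                double rate = (Double) mbsc.getAttribute(name, "OneMinuteRate");
                System.out.println(name + " OneMinuteRate=" + rate);
                total += rate;
            }
            System.out.println("Produce RequestsPerSec summed over all API versions: " + total);
        } finally {
            connector.close();
        }
    }
}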


Re: Kafka 2.1.0 JMX mbean broken ?

2018-12-14 Thread Raghav
Thanks Ismael. How to query it in 2.1 ? I tried all possible ways including
using version, but I am still getting the same exception message.

Thanks for your help.

On Thu, Dec 13, 2018 at 7:19 PM Ismael Juma  wrote:

> The metric was changed to include the API version. I believe this was in
> the upgrade notes for 2.0.0.
>
> Ismael
>
> On Thu, Dec 13, 2018, 3:35 PM Raghav 
> > Hi
> >
> > We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to monitor
> > our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed that
> the
> > *below* mentioned metric can't be retrie
> > and we get the below exception:
> >
> > *"kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"*
> >
> > javax.management.InstanceNotFoundException:
> > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce
> > at
> >
> >
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
> > at
> >
> >
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1375)
> > at
> >
> >
> com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
> > at
> >
> >
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1462)
> > at
> >
> >
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> > at
> >
> >
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> > at
> >
> >
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> > at
> >
> >
> javax.management.remote.rmi.RMIConnectionImpl.getMBeanInfo(RMIConnectionImpl.java:905)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> > at sun.rmi.transport.Transport$1.run(Transport.java:200)
> > at sun.rmi.transport.Transport$1.run(Transport.java:197)
> > at java.security.AccessController.doPrivileged(Native Method)
> > at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> > at
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> > at
> >
> >
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> > at
> >
> >
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> > at java.security.AccessController.doPrivileged(Native Method)
> > at
> >
> >
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:745)
> > at
> >
> >
> sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:276)
> > at
> > sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:253)
> > at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
> > at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
> > at
> javax.management.remote.rmi.RMIConnectionImpl_Stub.getMBeanInfo(Unknown
> > Source)
> > at
> >
> >
> javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getMBeanInfo(RMIConnector.java:1079)
> >
> >
> > We even tried adding version to it, but no avail.
> >
> > Can someone please guide us how to
> > get
> "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"
> > value ?
> >
> > R
> >
>


-- 
Raghav


Kafka 2.1.0 JMX mbean broken ?

2018-12-13 Thread Raghav
Hi

We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to monitor
our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed that the
*below* mentioned metric can't be retrieved
and we get the below exception:

*"kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"*

javax.management.InstanceNotFoundException:
kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce
at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1375)
at
com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
at
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1462)
at
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at
javax.management.remote.rmi.RMIConnectionImpl.getMBeanInfo(RMIConnectionImpl.java:905)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at
sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:276)
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:253)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
at javax.management.remote.rmi.RMIConnectionImpl_Stub.getMBeanInfo(Unknown
Source)
at
javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getMBeanInfo(RMIConnector.java:1079)


We even tried adding version to it, but to no avail.

Can someone please guide us how to
get "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce"
value ?

R


offsets.topic.replication.factor vs default.replication.factor

2018-10-18 Thread Raghav
Hi

We have a 3 node Kafka Brokers setup.

Our current value of default.replication.factor is 2.

What should be the recommended value of offsets.topic.replication.factor ?

Please advise, as we are not completely sure
about offsets.topic.replication.factor.

Thanks for your help.

R
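
For what it's worth, the common guidance for a 3 broker cluster is to keep the internal
__consumer_offsets topic at replication factor 3 (3 is also the broker default for
offsets.topic.replication.factor in recent releases), because losing that topic means losing
committed consumer offsets; default.replication.factor only applies to regular topics that get
auto-created. Note that the setting is only honoured when the offsets topic is first created.
Below is a rough sketch, not a recommendation, showing how to check what your cluster actually
created; it assumes the internal topic already exists, and the bootstrap servers are placeholders.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import java.util.Collections;
import java.util.Properties;

public class OffsetsTopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap servers; point these at your 3 brokers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("__consumer_offsets"))
                    .all().get().get("__consumer_offsets");
            int replicationFactor = desc.partitions().get(0).replicas().size();
            System.out.println("__consumer_offsets: " + desc.partitions().size()
                    + " partitions, replication factor " + replicationFactor);
        }
    }
}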


Re: Help understand error message - Remote broker is not the leader for partition

2018-10-10 Thread Raghav
Anyone ? We really hit the wall deciphering this error log, and we don't
know how to fix it.

On Wed, Oct 10, 2018 at 12:52 PM Raghav  wrote:

> Hi
>
> We are on Kafka 1.1 and have 3 Kafka brokers, and help your need to
> understand the error message, and what it would take to fix the problem.
>
> On Broker-1 we see the following logs for several and *some producers
> fails to write to Kafka*:
>
> [2018-10-08 12:28:25,609] INFO [ReplicaFetcher replicaId=1, leaderId=3,
> fetcherId=0] Remote broker is not the leader for partition
> __consumer_offsets-44, which could indicate that the partition is being
> moved (kafka.server.ReplicaFetcherThread)
> [2018-10-08 12:28:25,609] INFO [ReplicaFetcher replicaId=1, leaderId=3,
> fetcherId=0] Remote broker is not the leader for partition
> __consumer_offsets-20, which could indicate that the partition is being
> moved (kafka.server.ReplicaFetcherThread)
> [2018-10-08 12:28:25,609] INFO [ReplicaFetcher replicaId=1, leaderId=3,
> fetcherId=0] Remote broker is not the leader for partition
> __consumer_offsets-2, which could indicate that the partition is being
> moved (kafka.server.ReplicaFetcherThread)
>
> Here is server.properties on Broker-1
>
> # The id of the broker. This must be set to a unique integer for each broker.
> broker.id=1
> auto.create.topics.enable=true
> delete.topic.enable=true
>
> ############# SHUTDOWN and REBALANCING #############
> # Both the following properties are also enabled by default as well, also explicitly
> # settings here
> controlled.shutdown.enable=true
> auto.leader.rebalance.enable=true
> unclean.leader.election.enable=true
>
>
> --
> Raghav
>


-- 
Raghav


Help understand error message - Remote broker is not the leader for partition

2018-10-10 Thread Raghav
Hi

We are on Kafka 1.1 and have 3 Kafka brokers, and we need your help to
understand the error message, and what it would take to fix the problem.

On Broker-1 we see the following logs for several partitions, and *some
producers fail to write to Kafka*:

[2018-10-08 12:28:25,609] INFO [ReplicaFetcher replicaId=1, leaderId=3,
fetcherId=0] Remote broker is not the leader for partition
__consumer_offsets-44, which could indicate that the partition is being
moved (kafka.server.ReplicaFetcherThread)
[2018-10-08 12:28:25,609] INFO [ReplicaFetcher replicaId=1, leaderId=3,
fetcherId=0] Remote broker is not the leader for partition
__consumer_offsets-20, which could indicate that the partition is being
moved (kafka.server.ReplicaFetcherThread)
[2018-10-08 12:28:25,609] INFO [ReplicaFetcher replicaId=1, leaderId=3,
fetcherId=0] Remote broker is not the leader for partition
__consumer_offsets-2, which could indicate that the partition is being
moved (kafka.server.ReplicaFetcherThread)

Here is server.properties on Broker-1

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
auto.create.topics.enable=true
delete.topic.enable=true

############# SHUTDOWN and REBALANCING #############
# Both the following properties are also enabled by default as well, also explicitly
# settings here
controlled.shutdown.enable=true
auto.leader.rebalance.enable=true
unclean.leader.election.enable=true


-- 
Raghav
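
A note on the message itself: it usually means the replica fetcher on broker 1 still believes
broker 3 is the leader for those __consumer_offsets partitions while leadership is moving, which
auto.leader.rebalance.enable=true will trigger after restarts, and it normally clears on its own.
If producers keep failing, a reasonable first step is to look for partitions that are offline
(no leader) or under-replicated. The sketch below uses the Java AdminClient for that; it is only
an illustration and the bootstrap server is a placeholder.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListTopicsOptions;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class PartitionHealthCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Include internal topics so __consumer_offsets is covered as well.
            Set<String> topics = admin.listTopics(new ListTopicsOptions().listInternal(true)).names().get();
            Map<String, TopicDescription> descriptions = admin.describeTopics(topics).all().get();
            for (TopicDescription d : descriptions.values()) {
                for (TopicPartitionInfo p : d.partitions()) {
                    boolean offline = p.leader() == null;
                    boolean underReplicated = p.isr().size() < p.replicas().size();
                    if (offline || underReplicated) {
                        System.out.printf("%s-%d leader=%s isr=%s replicas=%s%n",
                                d.name(), p.partition(),
                                offline ? "NONE" : p.leader().idString(),
                                p.isr(), p.replicas());
                    }
                }
            }
        }
    }
}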


Please help: Zookeeper not coming up after power down

2018-08-16 Thread Raghav
Hi

Our 3 node Zookeeper ensemble got powered down, and upon powering up the
zookeeper could not get quorum and kept throwing these errors. As a result our
Kafka cluster was unusable. What is the best way to revive a ZK cluster in
such situations ? Please suggest.


2018-08-17_00:59:18.87009 2018-08-17 00:59:18,869 [myid:1] - WARN
 [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot
open channel to 2 at election address /1.1.1.143:3888
2018-08-17_00:59:18.87011 java.net.ConnectException: Connection refused
2018-08-17_00:59:18.87011   at
java.net.PlainSocketImpl.socketConnect(Native Method)
2018-08-17_00:59:18.87011   at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
2018-08-17_00:59:18.87012   at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
2018-08-17_00:59:18.87012   at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
2018-08-17_00:59:18.87013   at
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
2018-08-17_00:59:18.87013   at java.net.Socket.connect(Socket.java:589)
2018-08-17_00:59:18.87013   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
2018-08-17_00:59:18.87014   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
2018-08-17_00:59:18.87014   at
org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
2018-08-17_00:59:18.87014   at
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:957)
2018-08-17_00:59:18.87034 2018-08-17 00:59:18,870 [myid:1] - INFO
 [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@184] -
Resolved hostname: 1.1.1.143 to address: /1.1.1.143
2018-08-17_00:59:18.87095 2018-08-17 00:59:18,870 [myid:1] - WARN
 [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot
open channel to 3 at election address /1.1.1.144:3888
2018-08-17_00:59:18.87097 java.net.ConnectException: Connection refused
2018-08-17_00:59:18.87097   at
java.net.PlainSocketImpl.socketConnect(Native Method)
2018-08-17_00:59:18.87097   at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
2018-08-17_00:59:18.87098   at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
2018-08-17_00:59:18.87098   at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
2018-08-17_00:59:18.87098   at
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
2018-08-17_00:59:18.87098   at java.net.Socket.connect(Socket.java:589)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
2018-08-17_00:59:18.87099   at
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:957)

Thanks.

R
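
The "Connection refused" here only says that the other ensemble members were not yet listening on
their election port (3888) when this server tried to vote; until a majority of the three servers
can reach each other on the quorum (2888) and election (3888) ports there is no leader, and Kafka
cannot work. A simple first check after a power-up is to probe each member's ports, for example
with the rough sketch below; the host names are placeholders that should match the server.N
entries in zoo.cfg (the log shows 1.1.1.143 and 1.1.1.144 for two of them).

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkPortProbe {
    // Placeholders: replace with the hosts from your zoo.cfg.
    private static final String[] HOSTS = {"zk-1", "zk-2", "zk-3"};
    // Client, quorum and leader-election ports from a default zoo.cfg.
    private static final int[] PORTS = {2181, 2888, 3888};

    public static void main(String[] args) {
        for (String host : HOSTS) {
            for (int port : PORTS) {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 2000);
                    System.out.println(host + ":" + port + " reachable");
                } catch (IOException e) {
                    System.out.println(host + ":" + port + " NOT reachable (" + e.getMessage() + ")");
                }
            }
        }
    }
}

If a member answers on 2181 but refuses 3888, its quorum peer likely failed to start; its own log,
and whether the myid file and dataDir survived the power loss, would be the next thing to check.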


Re: Java API to read metrics via JMX

2018-08-09 Thread Raghav
Hi

I found
https://github.com/kafka-dev/kafka/blob/master/perf/src/main/java/kafka/perf/jmx/BrokerJmxClient.java
code
written by Neha to pull JMX metrics via MBean.

In here:
https://github.com/kafka-dev/kafka/blob/master/perf/src/main/java/kafka/perf/jmx/BrokerJmxClient.java#L37
there
is a mention of object name (kafka:type=kafka.SocketServerStats)  and
subsequently SocketServerStatsMBean.class.

My question is how to get these two for the latest Kafka 1.1.

I want to write code in Java to get Kafka Stats exposed via JMX and then
want to write to DB that our UI can read.

Thanks.

R

On Wed, Aug 8, 2018 at 6:46 PM, Raghav  wrote:

> Hi
>
> Is there any Java API available so that I can enable our Kafka cluster's
> JMX port, and consume metrics via JMX api, and dump to a time series
> database.
>
> I checked out jmxtrans, but currently it does not dump to TSDB (time
> series database).
>
> Thanks.
>
> R
>



-- 
Raghav
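
One caveat with that example: kafka.SocketServerStats and SocketServerStatsMBean come from the old
pre-0.8 codebase and, as far as I can tell, no longer exist in 1.1; current brokers expose Yammer
metrics such as kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec instead. The generic
pattern is the same though: connect with JMXConnectorFactory, read the attributes you care about,
and hand them to your own database writer. A rough sketch follows; the JMX URL, the chosen mbean
and the writeToDb stub are placeholders to replace with your own metric names and your
InfluxDB/OpenTSDB client.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxToTsdbPoller {
    public static void main(String[] args) throws Exception {
        // Placeholder: a broker started with JMX enabled on port 9999.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbsc = connector.getMBeanServerConnection();
        // Example broker metric; swap in whichever mbeans/attributes you want to ship.
        ObjectName bytesIn = new ObjectName("kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec");
        while (true) { // runs until the process is stopped
            Object rate = mbsc.getAttribute(bytesIn, "OneMinuteRate");
            writeToDb("kafka.bytes_in_per_sec", ((Number) rate).doubleValue(), System.currentTimeMillis());
            Thread.sleep(10_000);
        }
    }

    // Placeholder for your time-series writer (InfluxDB, OpenTSDB, ...); here it just prints.
    private static void writeToDb(String metric, double value, long timestampMs) {
        System.out.println(timestampMs + " " + metric + "=" + value);
    }
}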


Re: Java API to read metrics via JMX

2018-08-08 Thread Raghav
Thanks Boris. Is there a sample implementation in Java ?

thanks.

R

On Wed, Aug 8, 2018 at 7:10 PM, Boris Lublinsky <
boris.lublin...@lightbend.com> wrote:

> Its actually quite simple, unfortunately you have to read, and then write
> to TSDB.
> Enclosed is an example doing this and dumping to InfluxDB
>
>
>
> Boris Lublinsky
> FDP Architect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
> On Aug 8, 2018, at 8:46 PM, Raghav  wrote:
>
> Hi
>
> Is there any Java API available so that I can enable our Kafka cluster's
> JMX port, and consume metrics via JMX api, and dump to a time series
> database.
>
> I checked out jmxtrans, but currently it does not dump to TSDB (time series
> database).
>
> Thanks.
>
> R
>
>
>
>


-- 
Raghav


Java API to read metrics via JMX

2018-08-08 Thread Raghav
Hi

Is there any Java API available so that I can enable our Kafka cluster's
JMX port, and consume metrics via JMX api, and dump to a time series
database.

I checked out jmxtrans, but currently it does not dump to TSDB (time series
database).

Thanks.

R


Copy of Kafka Cluster in different Data Centers

2018-06-07 Thread Raghav
Hi

We want to have two copies of our Kafka cluster (in two different
data centers). In case one DC is unavailable, the Kafka cluster in the other
data center should be able to serve.

1. What are the recommended ways to achieve this ? I am assuming using
mirrormaker, we can achieve this. Any DOs or DON'Ts ?

2. How does ZooKeeper need to be handled ? Any DOs or DON'Ts for ZK ?

Thanks

R


Re: How to gracefully stop Kafka

2018-06-01 Thread Raghav
Thanks guys, I will try this and update to see if that worked.

On Fri, Jun 1, 2018 at 1:42 AM, M. Manna  wrote:

> Regarding graceful shutdown - I have got a response from Jan in the past -
> I am simply quoting that below:
>
> "A gracefully shutdown means the broker is only shutting down when it is
> not the leader of any partition.
> Therefore you should not be able to gracefully shut down your entire
> cluster."
>
> That said, you should allow some flexibility in your startup. I do my
> testbed (3-node) startup always the following way - and it works nicely
>
> 1) Start each zookeeper node - allow 5 seconds between each startup.
> 2) When all ZKs are up - wait for another 10 seconds
> 3) Start all brokers - allow 5 seconds between each startup
>
> Provided that your index files aren't corrupted - it should always start up
> normally.
>
> Regards,
>
>
>
>
> On 1 June 2018 at 07:37, Pena Quijada Alexander 
> wrote:
>
> > Hi,
> >
> > From my point of view, if you don't have any tool that help you in the
> > management of your broker services, in other to do a rolling restart
> > manually, you should shut down one broker at a time.
> >
> > In this way, you leave time to the broker controller service to balance
> > the active replicas into the healthy nodes.
> >
> > The same procedure when you start up your nodes.
> >
> > Regards!
> >
> > Alex
> >
> > Sent from BlueMail. On 1 Jun 2018, at 07:31, Raghav <raghavas...@gmail.com> wrote:
> >
> > Hi
> >
> > We have a 3 Kafka brokers setup on 0.10.2.1. We have a requirement in our
> > company environment that we have to first stop our 3 Kafka Broker setup,
> > then do some operations stuff that takes about 1 hours, and then bring up
> > Kafka (version 1.1) brokers again.
> >
> > In order to achieve this, we issue:
> >
> > 1. Run *bin/kafka-server-stop.sh* at the same time on all three brokers.
> > 2. Do operations on our environment for about 1 hour.
> > 3. Run bin/kafka-server-start.sh at the same time on all three brokers.
> >
> > Upon start, we observe that leadership for lot of partition is messed up.
> > The leadership shows up as -1 for lot of partitions. And ISR has no
> > servers. Because of this our Kafka cluster is unusable, and even restart
> of
> > brokers doesn't help.
> >
> > 1. Could it be because we are not doing rolling stop ?
> > 2. What's the best way to do rollling stop ?
> >
> > Please advise.
> > Thanks.
> >
> > R
> >
> > 
> >
> >
>



-- 
Raghav


How to gracefully stop Kafka

2018-05-31 Thread Raghav
Hi

We have a 3 Kafka brokers setup on 0.10.2.1. We have a requirement in our
company environment that we have to first stop our 3 Kafka Broker setup,
then do some operations work that takes about 1 hour, and then bring up
Kafka (version 1.1) brokers again.

In order to achieve this, we issue:

1. Run *bin/kafka-server-stop.sh* at the same time on all three brokers.
2. Do operations on our environment for about 1 hour.
3. Run bin/kafka-server-start.sh at the same time on all three brokers.

Upon start, we observe that leadership for a lot of partitions is messed up.
The leadership shows up as -1 for a lot of partitions. And ISR has no
servers. Because of this our Kafka cluster is unusable, and even restart of
brokers doesn't help.

1. Could it be because we are not doing rolling stop ?
2. What's the best way to do a rolling stop ?

Please advise.
Thanks.

R


Re: Help Needed: Leadership Issue upon Kafka Upgrade (ZooKeeper 3.4.9)

2018-05-13 Thread Raghav
Inline

On Fri, May 11, 2018 at 4:47 PM, Ted Yu  wrote:

> Was there any previous connectivity issue to 1.1.1.143:3888 before the
> upgrade ?
>
> No. Before the upgrade, everything was working just fine.


> I assume you have verified that connectivity between broker and 1.1.1.143
> <http://1.1.1.143:3888/> is Okay.
>
Yes, the connectivity is just fine. Ping and netcat just worked fine.

>
> Which zookeeper release are you running ?
>
We are on 3.4.9

>
> Cheers
>
>
Ted, anything else we could be missing, or some workaround that could help
us overcome this issue ? We can repro it every single time.


Many thanks.


> On Fri, May 11, 2018 at 3:16 PM, Raghav  wrote:
>
> > Hi
> >
> > We have a 3 node zk ensemble as well as 3 node Kafka Cluster. They both
> are
> > hosted on the same 3 VMs.
> >
> > Before Restart
> > 1. We were on Kafka 0.10.2.1
> >
> > After Restart
> > 1. We moved to Kafka 1.1
> >
> > We observe that Kafkas report leadership issues, and for lot of
> partitions
> > Leader is -1. I see some logs in ZK that mainly point towards some
> > connectivity issue around restart time.
> >
> > *We are stuck on this one for a while now, and neither rolling restart of
> > ZK is helping. Can you please help or point us how we can debug this.*
> >
> > *2018-05-11_17:20:49.00305 2018-05-11 17:20:49,002 [myid:1] - INFO
> > [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1
> (message
> > format version), 1 (n.leader), 0x20112 (n.zxid), 0x1 (n.round),
> LOOKING
> > (n.state), 1 (n.sid), 0x2 (n.peerEpoch) LOOKING (my
> > state)2018-05-11_17:20:49.01201
> > 2018-05-11 17:20:49,010 [myid:1] - WARN
> > [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 2
> at
> > election address /1.1.1.143:3888
> > 2018-05-11_17:20:49.01203 java.net.ConnectException: Connection
> > refused
> > 2018-05-11_17:20:49.01203   at
> > java.net.PlainSocketImpl.socketConnect(Native
> > Method)
> > 2018-05-11_17:20:49.01203   at java.net
> > .AbstractPlainSocketImpl.doConnect(
> > AbstractPlainSocketImpl.java:345)
> > 2018-05-11_17:20:49.01203   at java.net
> > .AbstractPlainSocketImpl.connectToAddress(
> > AbstractPlainSocketImpl.java:206)
> > 2018-05-11_17:20:49.01204   at java.net
> > .AbstractPlainSocketImpl.connect(
> > AbstractPlainSocketImpl.java:188)
> > 2018-05-11_17:20:49.01204   at
> > java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> > 2018-05-11_17:20:49.01204   at
> > java.net.Socket.connect(Socket.java:589)
> > 2018-05-11_17:20:49.01204   at
> > org.apache.zookeeper.server.quorum.QuorumCnxManager.
> > connectOne(QuorumCnxManager.java:381)
> > 2018-05-11_17:20:49.01204   at
> > org.apache.zookeeper.server.quorum.QuorumCnxManager.
> > toSend(QuorumCnxManager.java:354)
> > 2018-05-11_17:20:49.01205   at
> > org.apache.zookeeper.server.quorum.FastLeaderElection$
> > Messenger$WorkerSender.process(FastLeaderElection.java:452)
> > 2018-05-11_17:20:49.01205   at
> > org.apache.zookeeper.server.quorum.FastLeaderElection$
> > Messenger$WorkerSender.run(FastLeaderElection.java:433)
> > 2018-05-11_17:20:49.01206   at java.lang.Thread.run(Thread.
> java:745)*
> >
> > Rag
> >
>



-- 
Raghav


Help Needed: Leadership Issue upon Kafka Upgrade (ZooKeeper 3.4.9)

2018-05-11 Thread Raghav
Hi

We have a 3 node zk ensemble as well as 3 node Kafka Cluster. They both are
hosted on the same 3 VMs.

Before Restart
1. We were on Kafka 0.10.2.1

After Restart
1. We moved to Kafka 1.1

We observe that Kafkas report leadership issues, and for lot of partitions
Leader is -1. I see some logs in ZK that mainly point towards some
connectivity issue around restart time.

*We have been stuck on this one for a while now, and even a rolling restart of
ZK is not helping. Can you please help or point us to how we can debug this.*

*2018-05-11_17:20:49.00305 2018-05-11 17:20:49,002 [myid:1] - INFO
[WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message
format version), 1 (n.leader), 0x20112 (n.zxid), 0x1 (n.round), LOOKING
(n.state), 1 (n.sid), 0x2 (n.peerEpoch) LOOKING (my
state)2018-05-11_17:20:49.01201
2018-05-11 17:20:49,010 [myid:1] - WARN
[WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 2 at
election address /1.1.1.143:3888

2018-05-11_17:20:49.01203 java.net.ConnectException: Connection
refused
2018-05-11_17:20:49.01203   at
java.net.PlainSocketImpl.socketConnect(Native
Method)
2018-05-11_17:20:49.01203   at java.net
.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
2018-05-11_17:20:49.01203   at java.net
.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
2018-05-11_17:20:49.01204   at java.net
.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
2018-05-11_17:20:49.01204   at
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
2018-05-11_17:20:49.01204   at
java.net.Socket.connect(Socket.java:589)
2018-05-11_17:20:49.01204   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
2018-05-11_17:20:49.01204   at
org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
2018-05-11_17:20:49.01205   at
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
2018-05-11_17:20:49.01205   at
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
2018-05-11_17:20:49.01206   at java.lang.Thread.run(Thread.java:745)*

Rag


Move to Kafka 1.1 from 0.10.2.x ?

2018-04-05 Thread Raghav
Hi

Is there anything that needs to be taken care of if we want to move from
0.10.2.x to the latest 1.1 release ?

Is this a stable release, and is it recommended for production use ?

Thanks

Raghav


Re: Is Restart needed after change in trust store for Kafka 1.1 ?

2018-03-30 Thread Raghav
Anyone ?

On Thu, Mar 29, 2018 at 6:11 PM, Raghav  wrote:

> Hi
>
> We have a 3 node Kafka cluster running. Time to time, we have some changes
> in trust store and we restart Kafka to take new changes into account. We
> are on Kafka 0.10.x.
>
> If we move to 1.1, would we still need to restart Kafka upon trust store
> changes ?
>
> Thanks.
>
> --
> Raghav
>



-- 
Raghav


Is Restart needed after change in trust store for Kafka 1.1 ?

2018-03-29 Thread Raghav
Hi

We have a 3 node Kafka cluster running. From time to time, we have some changes
in the trust store and we restart Kafka to take the new changes into account. We
are on Kafka 0.10.x.

If we move to 1.1, would we still need to restart Kafka upon trust store
changes ?

Thanks.

-- 
Raghav


Re: Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2018-03-16 Thread Raghav
Is it recommended to move to the 1.0 release if we want to overcome this
issue ? Please advise, Ted.

Thanks.

R

On Thu, Mar 15, 2018 at 7:43 PM, Ted Yu  wrote:

> Looking at KAFKA-3702, it is still Open.
>
> FYI
>
> On Thu, Mar 15, 2018 at 5:51 PM, Raghav  wrote:
>
> > I am hitting this issue possible in 10.2.1. Can someone please confirm if
> > this issue was fixed in 10.2.1 or not ?
> >
> > R
> >
> > On Wed, Jun 7, 2017 at 11:50 AM, IT Consultant <0binarybudd...@gmail.com
> >
> > wrote:
> >
> > > Hi All ,
> > >
> > > Thanks a lot for your help .
> > >
> > > A bug has been logged for said issue and can be found at ,
> > >
> > > https://issues.apache.org/jira/browse/KAFKA-5401
> > >
> > >
> > > Thanks again .
> > >
> > > On Sun, Jun 4, 2017 at 6:38 PM, Martin Gainty 
> > wrote:
> > >
> > > >
> > > > 
> > > > From: IT Consultant <0binarybudd...@gmail.com>
> > > > Sent: Friday, June 2, 2017 11:02 AM
> > > > To: users@kafka.apache.org
> > > > Subject: Kafka Over TLS Error - Failed to send SSL Close message -
> > Broken
> > > > Pipe
> > > >
> > > > Hi All,
> > > >
> > > > I have been seeing below error since three days ,
> > > >
> > > > Can you please help me understand more about this ,
> > > >
> > > >
> > > > WARN Failed to send SSL Close message
> > > > (org.apache.kafka.common.network.SslTransportLayer)
> > > > java.io.IOException: Broken pipe
> > > >  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> > > >  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:
> > 47)
> > > >  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> > > >  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> > > >  at sun.nio.ch.SocketChannelImpl.
> write(SocketChannelImpl.java:
> > > 471)
> > > >  at
> > > > org.apache.kafka.common.network.SslTransportLayer.
> > > > flush(SslTransportLayer.java:194)
> > > >
> > > > MG>Here is org.apache.kafka.common.network.SslTransportLayer code:
> > > > /**
> > > > * Flushes the buffer to the network, non blocking
> > > > * @param buf ByteBuffer
> > > > * @return boolean true if the buffer has been emptied out, false
> > > > otherwise
> > > > * @throws IOException
> > > > */
> > > > private boolean flush(ByteBuffer buf) throws IOException {
> > > > int remaining = buf.remaining();
> > > > if (remaining > 0) {
> > > > int written = socketChannel.write(buf); //no check for
> > > > isOpen() *socketChannel.isOpen()*
> > > > return written >= remaining;
> > > > }
> > > > return true;
> > > > }
> > > >
> > > > MG>it appears upstream monitor *container* closed connection but
> kafka
> > > > socketChannel never tested (now-closed) connection with isOpen()
> > > > MG>i think you found a bug
> > > > MG>can you file bug in kafka-jira ?
> > > > https://issues.apache.org/jira/browse/KAFKA/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
> > > >
> > > >
> > > >
> > > >
> > > >  at
> > > > org.apache.kafka.common.network.SslTransportLayer.
> > > > close(SslTransportLayer.java:148)
> > > >  at
> > > > org.apache.kafka.common.network.KafkaChannel.close(
> > KafkaChannel.java:45)
> > > >  at
> > > > org.apache.kafka.common.network.Selector.close(Selector.java:442)
> > > >  at org.apache.kafka.common.network.Selector.poll(
> > > > Selector.java:310)
> > > >  at
> > > > org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> > > >  at
> > > > org.apache.kafka.clients.producer.internals.Sender.run(
> > Sender.java:216)
> > > >  at
> > > > org.apache.kafka.clients.producer.internals.Sender.run(
> > Sender.java:128)
> > > >  at java.lang.Thread.run(Thread.java:745)
> > > >
> > > >
> > > > Thanks  a lot.
> > > >
> > >
> >
> >
> >
> > --
> > Raghav
> >
>



-- 
Raghav


Re: Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2018-03-15 Thread Raghav
I am possibly hitting this issue in 10.2.1. Can someone please confirm if
this issue was fixed in 10.2.1 or not ?

R

On Wed, Jun 7, 2017 at 11:50 AM, IT Consultant <0binarybudd...@gmail.com>
wrote:

> Hi All ,
>
> Thanks a lot for your help .
>
> A bug has been logged for said issue and can be found at ,
>
> https://issues.apache.org/jira/browse/KAFKA-5401
>
>
> Thanks again .
>
> On Sun, Jun 4, 2017 at 6:38 PM, Martin Gainty  wrote:
>
> >
> > 
> > From: IT Consultant <0binarybudd...@gmail.com>
> > Sent: Friday, June 2, 2017 11:02 AM
> > To: users@kafka.apache.org
> > Subject: Kafka Over TLS Error - Failed to send SSL Close message - Broken
> > Pipe
> >
> > Hi All,
> >
> > I have been seeing below error since three days ,
> >
> > Can you please help me understand more about this ,
> >
> >
> > WARN Failed to send SSL Close message
> > (org.apache.kafka.common.network.SslTransportLayer)
> > java.io.IOException: Broken pipe
> >  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> >  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> >  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> >  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> >  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:
> 471)
> >  at
> > org.apache.kafka.common.network.SslTransportLayer.
> > flush(SslTransportLayer.java:194)
> >
> > MG>Here is org.apache.kafka.common.network.SslTransportLayer code:
> > /**
> > * Flushes the buffer to the network, non blocking
> > * @param buf ByteBuffer
> > * @return boolean true if the buffer has been emptied out, false
> > otherwise
> > * @throws IOException
> > */
> > private boolean flush(ByteBuffer buf) throws IOException {
> > int remaining = buf.remaining();
> > if (remaining > 0) {
> > int written = socketChannel.write(buf); //no check for
> > isOpen() *socketChannel.isOpen()*
> > return written >= remaining;
> > }
> > return true;
> > }
> >
> > MG>it appears upstream monitor *container* closed connection but kafka
> > socketChannel never tested (now-closed) connection with isOpen()
> > MG>i think you found a bug
> > MG>can you file bug in kafka-jira ?
> > https://issues.apache.org/jira/browse/KAFKA/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
> >
> >
> >
> >
> >  at
> > org.apache.kafka.common.network.SslTransportLayer.
> > close(SslTransportLayer.java:148)
> >  at
> > org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
> >  at
> > org.apache.kafka.common.network.Selector.close(Selector.java:442)
> >  at org.apache.kafka.common.network.Selector.poll(
> > Selector.java:310)
> >  at
> > org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> >  at
> > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> >  at
> > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> >  at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Thanks  a lot.
> >
>



-- 
Raghav


Please help - Failed to send SSL Close message java.io.IOException: Broken pipe

2018-03-15 Thread Raghav
Hi

We have a 3 node secure Kafka Cluster (
https://kafka.apache.org/documentation/#security_ssl)

Recently, my producer client is receiving the below message. Can someone
please help us understand the message, and give a few pointers to debug
and maybe fix this issue.


18/03/15 14:37:23 INFO producer.ProducerConfig:180 ProducerConfig values:
acks = all
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [172.31.166.121:9093, 172.31.166.122:9093,
172.31.166.120:9093]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 54
interceptor.classes = null
key.serializer = class
org.apache.kafka.common.serialization.StringSerializer
linger.ms = 1
max.block.ms = 6
max.in.flight.requests.per.connection = 50
max.request.size = 1048576
metadata.fetch.timeout.ms = 6
metadata.max.age.ms = 30
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 3
partitioner.class = class
org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 3
retries = 3
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 6
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /home/batman/ekc/keystore/keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /home/batman/ekc/keystore/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
timeout.ms = 3
value.serializer = class
org.apache.kafka.common.serialization.StringSerializer

18/03/15 14:37:23 INFO utils.AppInfoParser:83 Kafka version : 0.10.2.1
18/03/15 14:37:23 INFO utils.AppInfoParser:84 Kafka commitId :
e89bffd6b2eff799
18/03/15 14:37:23 INFO ekc.Writer:78 Number of messages to send = 8
18/03/15 14:37:23 WARN network.SslTransportLayer:166 Failed to send SSL
Close message
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at
org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
at
org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:150)
at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:731)
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:54)
at org.apache.kafka.common.network.Selector.doClose(Selector.java:540)
at org.apache.kafka.common.network.Selector.close(Selector.java:531)
at
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:378)
at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:225)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:126)
at java.lang.Thread.run(Thread.java:748)
18/03/15 14:37:23 WARN network.SslTransportLayer:166 Failed to send SSL
Close message
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at
org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
at
org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:731)
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:54)
at org.apache.kafka.common.network.Selector.doClose(Selector.java:540)
at org.apache.kafka.common.network.Selector.close(Selector.java:531)
at
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:378)
at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:225)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:126)
at java.lang.Thread.run(Thread.java:748)


R


Re: log.retention.bytes not working as expected

2018-02-06 Thread Raghav
By the way, is there a bug that was fixed in a later release ?
https://issues.apache.org/jira/browse/KAFKA-6030

Can you please confirm ?

On Tue, Feb 6, 2018 at 1:38 PM, Ted Yu  wrote:

> The log cleaner abortion in the log file preceded log deletion.
>
> On Tue, Feb 6, 2018 at 1:36 PM, Raghav  wrote:
>
> > Ted
> >
> > Sorry, I did not understand your point here.
> >
> > On Tue, Feb 6, 2018 at 1:09 PM, Ted Yu  wrote:
> >
> > > bq. but is aborted.
> > >
> > > See the following in LogManager#asyncDelete():
> > >
> > >   //We need to wait until there is no more cleaning task on the log
> > to
> > > be deleted before actually deleting it.
> > >
> > >   if (cleaner != null && !isFuture) {
> > >
> > > cleaner.abortCleaning(topicPartition)
> > >
> > > FYI
> > >
> > > On Tue, Feb 6, 2018 at 12:56 PM, Raghav  wrote:
> > >
> > > > From the log-cleaner.log, I see the following. It seems like it
> resume
> > > but
> > > > is aborted. Not sure how to read this:
> > > >
> > > >
> > > > [2018-02-06 18:06:22,178] INFO Compaction for partition topic043-27
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,178] INFO The cleaning for partition topic043-27
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,182] INFO The cleaning for partition topic043-51
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,182] INFO Compaction for partition topic043-51
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,182] INFO The cleaning for partition topic043-51
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,190] INFO The cleaning for partition topic043-52
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,190] INFO Compaction for partition topic043-52
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,190] INFO The cleaning for partition topic043-52
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,192] INFO The cleaning for partition topic043-45
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,192] INFO Compaction for partition topic043-45
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,192] INFO The cleaning for partition topic043-45
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,198] INFO The cleaning for partition topic043-20
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,198] INFO Compaction for partition topic043-20
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,198] INFO The cleaning for partition topic043-20
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,204] INFO The cleaning for partition topic043-63
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,204] INFO Compaction for partition topic043-63
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,204] INFO The cleaning for partition topic043-63
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,209] INFO The cleaning for partition topic043-44
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,209] INFO Compaction for partition topic043-44
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,209] INFO The cleaning for partition topic043-44
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,214] INFO The cleaning for partition topic043-38
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,214] INFO Compaction for partition topic043-38
> is
> > > > resumed (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,214] INFO The cleaning for partition topic043-38
> > is
> > > > aborted (kafka.log.LogCleaner)
> > > > [2018-02-06 18:06:22,219] INFO The cleaning for partition topic043-50
> > is
> > > > aborted and paused (kafka.log.LogCleaner)
> > >

Re: log.retention.bytes not working as expected

2018-02-06 Thread Raghav
Ted

Sorry, I did not understand your point here.

On Tue, Feb 6, 2018 at 1:09 PM, Ted Yu  wrote:

> bq. but is aborted.
>
> See the following in LogManager#asyncDelete():
>
>   //We need to wait until there is no more cleaning task on the log to
> be deleted before actually deleting it.
>
>   if (cleaner != null && !isFuture) {
>
> cleaner.abortCleaning(topicPartition)
>
> FYI
>
> On Tue, Feb 6, 2018 at 12:56 PM, Raghav  wrote:
>
> > From the log-cleaner.log, I see the following. It seems like it resume
> but
> > is aborted. Not sure how to read this:
> >
> >
> > [2018-02-06 18:06:22,178] INFO Compaction for partition topic043-27 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,178] INFO The cleaning for partition topic043-27 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,182] INFO The cleaning for partition topic043-51 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,182] INFO Compaction for partition topic043-51 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,182] INFO The cleaning for partition topic043-51 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,190] INFO The cleaning for partition topic043-52 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,190] INFO Compaction for partition topic043-52 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,190] INFO The cleaning for partition topic043-52 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,192] INFO The cleaning for partition topic043-45 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,192] INFO Compaction for partition topic043-45 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,192] INFO The cleaning for partition topic043-45 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,198] INFO The cleaning for partition topic043-20 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,198] INFO Compaction for partition topic043-20 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,198] INFO The cleaning for partition topic043-20 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,204] INFO The cleaning for partition topic043-63 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,204] INFO Compaction for partition topic043-63 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,204] INFO The cleaning for partition topic043-63 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,209] INFO The cleaning for partition topic043-44 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,209] INFO Compaction for partition topic043-44 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,209] INFO The cleaning for partition topic043-44 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,214] INFO The cleaning for partition topic043-38 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,214] INFO Compaction for partition topic043-38 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,214] INFO The cleaning for partition topic043-38 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,219] INFO The cleaning for partition topic043-50 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,219] INFO Compaction for partition topic043-50 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,219] INFO The cleaning for partition topic043-50 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,224] INFO The cleaning for partition topic043-2 is
> > aborted and paused (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,224] INFO Compaction for partition topic043-2 is
> > resumed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:22,224] INFO The cleaning for partition topic043-2 is
> > aborted (kafka.log.LogCleaner)
> > [2018-02-06 18:06:54,643] INFO Shutting down the log cleaner.
> > (kafka.log.LogCleaner)
> > [2018-02-06 18:06:54,643] INFO [kafka-log-cleaner-thread-0], Shutting
> down
> > (kafka.log.LogCleaner)
> > [2018-02-06 18:06:54,644] INFO [kafka-log-cleaner-thread-0], Shutdown
> > completed (kafka.log.LogCleaner)
> > [2018-02-06 18:06:54,644] INFO [kafka-log-cleaner-thread-0], Stopped
> >  (kafka.log.LogCleaner)
> > [2018-02-06 18:06:57,663] INFO Starting the log cleaner
> > (kafka.log.LogCleaner)
> > [2018-02-06 18:06:57,665] INFO [kafka-log-cleaner-thread-0], Starting
>

Re: log.retention.bytes not working as expected

2018-02-06 Thread Raghav
From the log-cleaner.log, I see the following. It seems like it resumes but
is aborted. Not sure how to read this:


[2018-02-06 18:06:22,178] INFO Compaction for partition topic043-27 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,178] INFO The cleaning for partition topic043-27 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,182] INFO The cleaning for partition topic043-51 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,182] INFO Compaction for partition topic043-51 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,182] INFO The cleaning for partition topic043-51 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,190] INFO The cleaning for partition topic043-52 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,190] INFO Compaction for partition topic043-52 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,190] INFO The cleaning for partition topic043-52 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,192] INFO The cleaning for partition topic043-45 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,192] INFO Compaction for partition topic043-45 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,192] INFO The cleaning for partition topic043-45 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,198] INFO The cleaning for partition topic043-20 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,198] INFO Compaction for partition topic043-20 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,198] INFO The cleaning for partition topic043-20 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,204] INFO The cleaning for partition topic043-63 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,204] INFO Compaction for partition topic043-63 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,204] INFO The cleaning for partition topic043-63 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,209] INFO The cleaning for partition topic043-44 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,209] INFO Compaction for partition topic043-44 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,209] INFO The cleaning for partition topic043-44 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,214] INFO The cleaning for partition topic043-38 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,214] INFO Compaction for partition topic043-38 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,214] INFO The cleaning for partition topic043-38 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,219] INFO The cleaning for partition topic043-50 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,219] INFO Compaction for partition topic043-50 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,219] INFO The cleaning for partition topic043-50 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:22,224] INFO The cleaning for partition topic043-2 is
aborted and paused (kafka.log.LogCleaner)
[2018-02-06 18:06:22,224] INFO Compaction for partition topic043-2 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,224] INFO The cleaning for partition topic043-2 is
aborted (kafka.log.LogCleaner)
[2018-02-06 18:06:54,643] INFO Shutting down the log cleaner.
(kafka.log.LogCleaner)
[2018-02-06 18:06:54,643] INFO [kafka-log-cleaner-thread-0], Shutting down
(kafka.log.LogCleaner)
[2018-02-06 18:06:54,644] INFO [kafka-log-cleaner-thread-0], Shutdown
completed (kafka.log.LogCleaner)
[2018-02-06 18:06:54,644] INFO [kafka-log-cleaner-thread-0], Stopped
 (kafka.log.LogCleaner)
[2018-02-06 18:06:57,663] INFO Starting the log cleaner
(kafka.log.LogCleaner)
[2018-02-06 18:06:57,665] INFO [kafka-log-cleaner-thread-0], Starting
 (kafka.log.LogCleaner)
[2018-02-06 18:08:07,187] INFO Shutting down the log cleaner.
(kafka.log.LogCleaner)
[2018-02-06 18:08:07,187] INFO [kafka-log-cleaner-thread-0], Shutting down
(kafka.log.LogCleaner)
[2018-02-06 18:08:07,187] INFO [kafka-log-cleaner-thread-0], Stopped
 (kafka.log.LogCleaner)
[2018-02-06 18:08:07,187] INFO [kafka-log-cleaner-thread-0], Shutdown
completed (kafka.log.LogCleaner)
[2018-02-06 18:08:11,701] INFO Starting the log cleaner
(kafka.log.LogCleaner)
[2018-02-06 18:08:11,703] INFO [kafka-log-cleaner-thread-0], Starting
 (kafka.log.LogCleaner)


Re: log.retention.bytes not working as expected

2018-02-06 Thread Raghav
Linux. CentOS.

On Tue, Feb 6, 2018 at 12:26 PM, M. Manna  wrote:

> Is this Windows or Linux?
>
> On 6 Feb 2018 8:24 pm, "Raghav"  wrote:
>
> > Hi
> >
> > While configuring a topic, we are specifying the retention bytes per
> topic
> > as follows. Our retention time in hours is 48.
> >
> > *bin/kafka-topics.sh --zookeeper zk-1:2181,zk-2:2181,zk-3:2181 --create
> > --topic AmazingTopic --replication-factor 2 --partitions 64 --config
> > retention.bytes=16106127360 --force*
> >
> > According to Kafka documentation, when either of the condition is met,
> the
> > deletion of the log happens from the rear end.
> >
> > Can anyone tell as to why retention.bytes is not working ?
> >
> > Here is the relevant Kafka config:
> >
> >
> > # The minimum age of a log file to be eligible for deletion
> > *log.retention.hours=48*
> >
> > # A size-based retention policy for logs. Segments are pruned from the
> log
> > as long as the remaining
> > # segments don't drop below log.retention.bytes.
> > *log.retention.bytes=42949672960*
> >
> > # The maximum size of a log segment file. When this size is reached a new
> > log segment will be created.
> > *log.segment.bytes=1073741824*
> >
> > # The interval at which log segments are checked to see if they can be
> > deleted according
> > # to the retention policies
> > *log.retention.check.interval.ms=30*
> >
> > # By default the log cleaner is disabled and the log retention policy
> will
> > default to just delete segments after their retention expires.
> > # If log.cleaner.enable=true is set the cleaner will be enabled and
> > individual logs can then be marked for log compaction.
> > *log.cleaner.enable=true*
> >
> > Thanks.
> >
>



-- 
Raghav


Re: Kafka per topic retention.bytes and global log.retention.bytes not working

2018-02-06 Thread Raghav
We are on Kafka 10.2.1 and facing a similar issue. Upgrading to 1.0 is
disruptive. Is there any other way this can be circumvented?

Thanks.

On Fri, Jan 12, 2018 at 1:24 AM, Wim Van Leuven <
wim.vanleu...@highestpoint.biz> wrote:

> awesome!
>
> On Thu, 11 Jan 2018 at 23:48 Thunder Stumpges 
> wrote:
>
> > Thanks, yes we upgraded to 1.0.0 and that has indeed fixed the issue.
> > Thanks for the pointer!
> > -Thunder
> >
> >
> > On Tue, Jan 9, 2018 at 9:50 PM Wim Van Leuven <
> > wim.vanleu...@highestpoint.biz> wrote:
> >
> > > Upgrade?
> > >
> > > On Wed, Jan 10, 2018, 00:26 Thunder Stumpges <
> thunder.stump...@gmail.com
> > >
> > > wrote:
> > >
> > > > How would I know if we are seeing that issue? We are running 0.11.0.0
> > so
> > > we
> > > > would not have this fix.
> > > >
> > > > On Tue, Jan 9, 2018 at 11:07 AM Wim Van Leuven <
> > > > wim.vanleu...@highestpoint.biz> wrote:
> > > >
> > > > > What minor version of Kafka are you running? Might you be impacted
> by
> > > > > https://issues.apache.org/jira/browse/KAFKA-6030?
> > > > > -w
> > > > >
> > > > > On Tue, 9 Jan 2018 at 19:02 Thunder Stumpges <
> > > thunder.stump...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Hello, I posted this on StackOverflow
> > > > > > <
> > > > > >
> > > > >
> > > >
> > >
> > https://stackoverflow.com/questions/47948399/kafka-per-
> topic-retention-bytes-and-global-log-retention-bytes-not-working
> > > > > > >also
> > > > > > but haven't gotten any response.
> > > > > >
> > > > > > thanks in advance,
> > > > > > Thunder
> > > > > > __
> > > > > >
> > > > > > We are running a 6 node cluster of kafka 0.11.0. We have set a
> > global
> > > > as
> > > > > > well as a per-topic retention in bytes, neither of which is being
> > > > > applied.
> > > > > > There are no errors that I can see in the logs, just nothing
> being
> > > > > deleted
> > > > > > (by size; the time retention does seem to be working)
> > > > > >
> > > > > > See relevant configs below:
> > > > > >
> > > > > > *./config/server.properties* :
> > > > > >
> > > > > > # global retention 75GB or 60 days, segment size 512MB
> > > > > > log.retention.bytes=750
> > > > > > log.retention.check.interval.ms=6
> > > > > >
> > > > > > log.retention.hours=1440
> > > > > >
> > > > > > log.cleanup.policy=delete
> > > > > >
> > > > > > log.segment.bytes=536870912
> > > > > >
> > > > > > *topic configuration (30GB):*
> > > > > >
> > > > > > [tstumpges@kafka-02 kafka]$ bin/kafka-topics.sh  --zookeeper
> > > > > > zk-01:2181/kafka --describe --topic stg_logtopic
> > > > > > Topic:stg_logtopicPartitionCount:12   ReplicationFactor:3
> > > > > > Configs:retention.bytes=300
> > > > > > Topic: stg_logtopic   Partition: 0Leader: 4
> > > > > > Replicas: 4,5,6 Isr: 4,5,6
> > > > > > Topic: stg_logtopic   Partition: 1Leader: 5
> > > > > > Replicas: 5,6,1 Isr: 5,1,6
> > > > > > ...
> > > > > >
> > > > > > And, disk usage showing 910GB usage for one partition!
> > > > > >
> > > > > > [tstumpges@kafka-02 kafka]$ sudo du -s -h /data1/kafka-data/*
> > > > > > 82G /data1/kafka-data/stg_logother3-2
> > > > > > 155G/data1/kafka-data/stg_logother2-9
> > > > > > 169G/data1/kafka-data/stg_logother1-6
> > > > > > 910G/data1/kafka-data/stg_logtopic-4
> > > > > >
> > > > > > I can see there are plenty of segment log files (512MB each) in
> the
> > > > > > partition directory... what is going on?!
> > > > > >
> > > > > > Thanks in advance, Thunder
> > > > > >
> > > > >
> > > >
> > >
> >
>



-- 
Raghav


log.retention.bytes not working as expected

2018-02-06 Thread Raghav
Hi

While configuring a topic, we are specifying the retention bytes per topic
as follows. Our retention time in hours is 48.

*bin/kafka-topics.sh --zookeeper zk-1:2181,zk-2:2181,zk-3:2181 --create
--topic AmazingTopic --replication-factor 2 --partitions 64 --config
retention.bytes=16106127360 --force*

According to the Kafka documentation, when either of the conditions is met,
deletion of the log happens from the rear end.

Can anyone tell us why retention.bytes is not working?

Here is the relevant Kafka config:


# The minimum age of a log file to be eligible for deletion
*log.retention.hours=48*

# A size-based retention policy for logs. Segments are pruned from the log
as long as the remaining
# segments don't drop below log.retention.bytes.
*log.retention.bytes=42949672960*

# The maximum size of a log segment file. When this size is reached a new
log segment will be created.
*log.segment.bytes=1073741824*

# The interval at which log segments are checked to see if they can be
deleted according
# to the retention policies
*log.retention.check.interval.ms=30*

# By default the log cleaner is disabled and the log retention policy will
default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and
individual logs can then be marked for log compaction.
*log.cleaner.enable=true*

Thanks.
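
For completeness, per-topic retention can also be inspected and changed at runtime through the Java AdminClient (available from 0.11.0 onward) instead of only at topic creation time. A minimal sketch, assuming a placeholder bootstrap address and reusing the topic name from the command above:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicRetentionCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "AmazingTopic");

            // Print the effective per-topic value of retention.bytes
            Config current = admin.describeConfigs(Collections.singleton(topic)).all().get().get(topic);
            System.out.println(current.get("retention.bytes"));

            // Override retention.bytes for this topic (roughly 15 GB, as in the command above)
            Config update = new Config(Collections.singleton(new ConfigEntry("retention.bytes", "16106127360")));
            admin.alterConfigs(Collections.singletonMap(topic, update)).all().get();
        }
    }
}

Note that this older alterConfigs call is not incremental: it replaces the topic's full set of overrides, so include every override you want to keep.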


Re: Question about disks and log.dirs

2017-11-30 Thread Raghav
Can someone please help here ?

On Thu, Nov 23, 2017 at 10:42 AM, Raghav  wrote:

> Anyone here ?
>
> On Wed, Nov 22, 2017 at 4:04 PM, Raghav  wrote:
>
>> Hi
>>
>> If I give several locations with smaller capacity for log.dirs vs one
>> large drive for log.dirs, are there any PROS or CONS between the two
>> (assuming total storage is same in both cases).
>>
>> I don't have access to one drive for log.dirs, but several smaller
>> directories. I just want to ensure that there are no issues.
>>
>> Thanks.
>>
>> --
>> Raghav
>>
>
>
>
> --
> Raghav
>



-- 
Raghav


Re: Question about disks and log.dirs

2017-11-23 Thread Raghav
Anyone here ?

On Wed, Nov 22, 2017 at 4:04 PM, Raghav  wrote:

> Hi
>
> If I give several locations with smaller capacity for log.dirs vs one
> large drive for log.dirs, are there any PROS or CONS between the two
> (assuming total storage is same in both cases).
>
> I don't have access to one drive for log.dirs, but several smaller
> directories. I just want to ensure that there are no issues.
>
> Thanks.
>
> --
> Raghav
>



-- 
Raghav


Question about disks and log.dirs

2017-11-22 Thread Raghav
Hi

If I give log.dirs several locations with smaller capacity vs one large
drive, are there any pros or cons between the two (assuming total storage is
the same in both cases)?

I don't have access to one large drive for log.dirs, only several smaller
directories. I just want to ensure that there are no issues.

Thanks.

-- 
Raghav


Re: Kafka Internals Video/Blog

2017-09-20 Thread Raghav
Thanks, Manna. I did watch that earlier. I am looking for something more
detailed on the implementation and design.

I want to dabble with the code, but given its complexity, a good starter
guide would be helpful.

Thanks.

On Wed, Sep 20, 2017 at 9:53 AM, M. Manna  wrote:

> There's a video where Jay Kreps talks about how Kafka works - YouTube has
> it as the top 5 under "How Kafka Works".
>
>
> On 20 Sep 2017 5:49 pm, "Raghav"  wrote:
>
> > Hi
> >
> > Just wondering if there is any video/blog that goes over Kafka Internal
> and
> > under the hood design and implementation details. I am a newbie and I
> would
> > like to dabble with the code and understand design of it. Just wondering
> if
> > there is any video, blog etc that goes over it ?
> >
> > Thanks.
> >
> > --
> > Raghav
> >
>



-- 
Raghav


Kafka Internals Video/Blog

2017-09-20 Thread Raghav
Hi

Just wondering if there is any video/blog that goes over Kafka internals and
under-the-hood design and implementation details. I am a newbie and I would
like to dabble with the code and understand its design. Is there any video,
blog, etc. that goes over it?

Thanks.

-- 
Raghav


Re: Kafka Summit 2017 San Francisco Talks

2017-09-18 Thread Raghav
Thanks, Guozhang.

On Mon, Sep 18, 2017 at 5:23 PM, Guozhang Wang  wrote:

> It is available online now:
> https://www.confluent.io/kafka-summit-sf17/resource/
>
>
> Guozhang
>
> On Tue, Sep 19, 2017 at 8:13 AM, Raghav  wrote:
>
> > Hi
> >
> > Just wondering if the videos are available anywhere from Kafka Summit
> 2017
> > to watch ?
> >
> > --
> > Raghav
> >
>
>
>
> --
> -- Guozhang
>



-- 
Raghav


Kafka Summit 2017 San Francisco Talks

2017-09-18 Thread Raghav
Hi

Just wondering if the videos from Kafka Summit 2017 are available anywhere
to watch?

-- 
Raghav


Re: Reduce Kafka Client logging

2017-09-08 Thread Raghav
Thanks, Kamal.

On Fri, Sep 8, 2017 at 4:10 AM, Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> add this lines at the end of your log4j.properties,
>
> log4j.logger.org.apache.kafka.clients.producer=WARN
>
> On Thu, Sep 7, 2017 at 5:27 PM, Raghav  wrote:
>
> > Hi Viktor
> >
> > Can you pleas share the log4j config snippet that I should use. My Java
> > code's current log4j looks like this. How should I add this new entry
> that
> > you mentioned ? Thanks.
> >
> >
> > log4j.rootLogger=INFO, STDOUT
> >
> > log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
> > log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
> > log4j.appender.STDOUT.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p
> > %c{2}:%L %m%n
> >
> > log4j.appender.file=org.apache.log4j.RollingFileAppender
> > log4j.appender.file.File=logfile.log
> > log4j.appender.file.layout=org.apache.log4j.PatternLayout
> > log4j.appender.file.layout.ConversionPattern=%d{dd-MM- HH:mm:ss}
> %-5p
> > %c{1}:%L - %m%n
> >
> > On Thu, Sep 7, 2017 at 2:34 AM, Viktor Somogyi 
> > wrote:
> >
> > > Hi Raghav,
> > >
> > > I think it is enough to raise the logging level
> > > of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
> > > Also I'd like to mention that if possible, don't recreate the Kafka
> > > producer each time. The protocol is designed for long-living
> connections
> > > and recreating the connection each time puts pressure on the TCP layer
> > (the
> > > connection is expensive) and also on Kafka as well which may result in
> > > broker failures (typically exceeding the maximum allowed number of file
> > > descriptors).
> > >
> > > HTH,
> > > Viktor
> > >
> > > On Thu, Sep 7, 2017 at 7:35 AM, Raghav  wrote:
> > >
> > > > Due to the nature of code, I have to open a connection to a different
> > > Kafka
> > > > broker each time, and send one message. We have several Kafka
> brokers.
> > So
> > > > my client log is full with the following logs. What log settings
> > should I
> > > > use in log4j just for Kafka producer logs ?
> > > >
> > > >
> > > > 17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig
> > values:
> > > > acks = all
> > > > batch.size = 16384
> > > > block.on.buffer.full = false
> > > > bootstrap.servers = [10.10.10.5:]
> > > > buffer.memory = 33554432
> > > > client.id =
> > > > compression.type = none
> > > > connections.max.idle.ms = 54
> > > > interceptor.classes = null
> > > > key.serializer = class
> > > > org.apache.kafka.common.serialization.StringSerializer
> > > > linger.ms = 1
> > > > max.block.ms = 5000
> > > > max.in.flight.requests.per.connection = 5
> > > > max.request.size = 1048576
> > > > metadata.fetch.timeout.ms = 6
> > > > metadata.max.age.ms = 30
> > > > metric.reporters = []
> > > > metrics.num.samples = 2
> > > > metrics.sample.window.ms = 3
> > > > partitioner.class = class
> > > > org.apache.kafka.clients.producer.internals.DefaultPartitioner
> > > > receive.buffer.bytes = 32768
> > > > reconnect.backoff.ms = 50
> > > > request.timeout.ms = 5000
> > > > retries = 0
> > > > retry.backoff.ms = 100
> > > > sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > > > sasl.kerberos.min.time.before.relogin = 6
> > > > sasl.kerberos.service.name = null
> > > > sasl.kerberos.ticket.renew.jitter = 0.05
> > > > sasl.kerberos.ticket.renew.window.factor = 0.8
> > > > sasl.mechanism = GSSAPI
> > > > security.protocol = PLAINTEXT
> > > > send.buffer.bytes = 131072
> > > > ssl.cipher.suites = null
> > > > ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > > > ssl.endpoint.identification.algorithm = null
> > > > ssl.key.password = null
> > > > ssl.keymanager.algorithm = SunX509
> > > > ssl.keystore.location = null
> &

Re: Reduce Kafka Client logging

2017-09-07 Thread Raghav
Hi Viktor

Can you please share the log4j config snippet that I should use? My Java
code's current log4j looks like this. How should I add the new entry that
you mentioned? Thanks.


log4j.rootLogger=INFO, STDOUT

log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
log4j.appender.STDOUT.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p
%c{2}:%L %m%n

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=logfile.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{dd-MM- HH:mm:ss} %-5p
%c{1}:%L - %m%n

On Thu, Sep 7, 2017 at 2:34 AM, Viktor Somogyi 
wrote:

> Hi Raghav,
>
> I think it is enough to raise the logging level
> of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
> Also I'd like to mention that if possible, don't recreate the Kafka
> producer each time. The protocol is designed for long-living connections
> and recreating the connection each time puts pressure on the TCP layer (the
> connection is expensive) and also on Kafka as well which may result in
> broker failures (typically exceeding the maximum allowed number of file
> descriptors).
>
> HTH,
> Viktor
>
> On Thu, Sep 7, 2017 at 7:35 AM, Raghav  wrote:
>
> > Due to the nature of code, I have to open a connection to a different
> Kafka
> > broker each time, and send one message. We have several Kafka brokers. So
> > my client log is full with the following logs. What log settings should I
> > use in log4j just for Kafka producer logs ?
> >
> >
> > 17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig values:
> > acks = all
> > batch.size = 16384
> > block.on.buffer.full = false
> > bootstrap.servers = [10.10.10.5:]
> > buffer.memory = 33554432
> > client.id =
> > compression.type = none
> > connections.max.idle.ms = 54
> > interceptor.classes = null
> > key.serializer = class
> > org.apache.kafka.common.serialization.StringSerializer
> > linger.ms = 1
> > max.block.ms = 5000
> > max.in.flight.requests.per.connection = 5
> > max.request.size = 1048576
> > metadata.fetch.timeout.ms = 6
> > metadata.max.age.ms = 30
> > metric.reporters = []
> > metrics.num.samples = 2
> > metrics.sample.window.ms = 3
> > partitioner.class = class
> > org.apache.kafka.clients.producer.internals.DefaultPartitioner
> > receive.buffer.bytes = 32768
> > reconnect.backoff.ms = 50
> > request.timeout.ms = 5000
> > retries = 0
> > retry.backoff.ms = 100
> > sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > sasl.kerberos.min.time.before.relogin = 6
> > sasl.kerberos.service.name = null
> > sasl.kerberos.ticket.renew.jitter = 0.05
> > sasl.kerberos.ticket.renew.window.factor = 0.8
> > sasl.mechanism = GSSAPI
> > security.protocol = PLAINTEXT
> > send.buffer.bytes = 131072
> > ssl.cipher.suites = null
> > ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > ssl.endpoint.identification.algorithm = null
> > ssl.key.password = null
> > ssl.keymanager.algorithm = SunX509
> > ssl.keystore.location = null
> > ssl.keystore.password = null
> > ssl.keystore.type = JKS
> > ssl.protocol = TLS
> > ssl.provider = null
> > ssl.secure.random.implementation = null
> > ssl.trustmanager.algorithm = PKIX
> > ssl.truststore.location = null
> >     ssl.truststore.password = null
> > ssl.truststore.type = JKS
> > timeout.ms = 3
> > value.serializer = class
> > org.apache.kafka.common.serialization.StringSerializer
> >
> > On Wed, Sep 6, 2017 at 9:37 PM, Jaikiran Pai 
> > wrote:
> >
> > > Can you post the exact log messages that you are seeing?
> > >
> > > -Jaikiran
> > >
> > >
> > >
> > > On 07/09/17 7:55 AM, Raghav wrote:
> > >
> > >> Hi
> > >>
> > >> My Java code produces Kafka config overtime it does a send which makes
> > log
> > >> very very verbose.
> > >>
> > >> How can I reduce the Kafka client (producer) logging in my java code ?
> > >>
> > >> Thanks for your help.
> > >>
> > >>
> > >
> >
> >
> > --
> > Raghav
> >
>



-- 
Raghav
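
If editing log4j.properties is awkward, the same effect can usually be achieved programmatically with log4j 1.x before any producer is created. A minimal sketch, assuming the logger names follow the Kafka client package layout visible in the log lines above:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietKafkaClientLogs {
    public static void main(String[] args) {
        // Silence the per-producer "ProducerConfig values:" INFO dump
        Logger.getLogger("org.apache.kafka.clients.producer.ProducerConfig").setLevel(Level.WARN);

        // Or raise the level for the whole client package if the narrower logger is not enough
        Logger.getLogger("org.apache.kafka.clients").setLevel(Level.WARN);

        // ... create producers and send as usual from here
    }
}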


Re: Reduce Kafka Client logging

2017-09-06 Thread Raghav
Due to the nature of the code, I have to open a connection to a different
Kafka broker each time and send one message. We have several Kafka brokers,
so my client log is full of the following logs. What log settings should I
use in log4j just for the Kafka producer logs?


17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig values:
acks = all
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [10.10.10.5:]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 54
interceptor.classes = null
key.serializer = class
org.apache.kafka.common.serialization.StringSerializer
linger.ms = 1
max.block.ms = 5000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 6
metadata.max.age.ms = 30
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 3
partitioner.class = class
org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 5000
retries = 0
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 6
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 3
value.serializer = class
org.apache.kafka.common.serialization.StringSerializer

On Wed, Sep 6, 2017 at 9:37 PM, Jaikiran Pai 
wrote:

> Can you post the exact log messages that you are seeing?
>
> -Jaikiran
>
>
>
> On 07/09/17 7:55 AM, Raghav wrote:
>
>> Hi
>>
>> My Java code produces Kafka config overtime it does a send which makes log
>> very very verbose.
>>
>> How can I reduce the Kafka client (producer) logging in my java code ?
>>
>> Thanks for your help.
>>
>>
>


-- 
Raghav
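
A rough sketch of the producer reuse Viktor describes: if the brokers really belong to separate clusters, one long-lived producer can be cached per bootstrap address instead of opening a new connection per message; if they are all in one cluster, a single producer is enough, since the bootstrap list only seeds metadata. The addresses below are placeholders:

import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerCache {
    private final ConcurrentMap<String, Producer<String, String>> producers = new ConcurrentHashMap<>();

    private Producer<String, String> producerFor(String bootstrapServers) {
        // Create the producer only once per cluster and keep it for the life of the application
        return producers.computeIfAbsent(bootstrapServers, servers -> {
            Properties props = new Properties();
            props.put("bootstrap.servers", servers);
            props.put("acks", "all");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            return new KafkaProducer<>(props);
        });
    }

    public void send(String bootstrapServers, String topic, String message) {
        // Reuses the long-lived connection instead of opening a new one per message
        producerFor(bootstrapServers).send(new ProducerRecord<>(topic, message));
    }

    public void close() {
        producers.values().forEach(Producer::close);
    }
}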


Reduce Kafka Client logging

2017-09-06 Thread Raghav
Hi

My Java code logs the Kafka producer config every time it does a send, which
makes the log very verbose.

How can I reduce the Kafka client (producer) logging in my Java code?

Thanks for your help.

-- 
Raghav


Re: How to debug - NETWORK_EXCEPTION

2017-08-31 Thread Raghav
Kafka brokers only. The clients were Java clients that used the same client
version as the broker.

On Thu, Aug 31, 2017 at 5:43 AM, Saravanan Tirugnanum 
wrote:

> Thank you Raghav. Was it like you upgraded Kafka Broker or Clients or both.
>
> Regards
> Saravanan
>
> On Wednesday, August 30, 2017 at 6:31:34 PM UTC-5, Raghav wrote:
>>
>> I was never able to debug this exception. I, unfortunately, moved to
>> Apache Kafka 10.2.1 from Confluent 3.2.1 and this issue went away.
>>
>> On Wed, Aug 30, 2017 at 11:46 AM, Saravanan Tirugnanum > > wrote:
>>
>>> Hi
>>>
>>> Were you able to figure out this issue? Any clue ?
>>>
>>> Regards
>>> Saravanan
>>>
>>>
>>> On Wednesday, August 9, 2017 at 11:51:19 PM UTC-5, Raghav wrote:
>>>>
>>>> Hi
>>>>
>>>> I am sending very small 32 byte message to Kafka broker in a tight loop
>>>> with 250ms sleep. I have one broker, 1 partition, and replication factor =
>>>> 1.
>>>>
>>>> After about 4200 messages, I get *following *error pasted below.
>>>>
>>>> How can I debug this error ? Can you please throw some ideas for me to
>>>> debug ? Stuck on this for a while now. Need help here.
>>>>
>>>> 17/08/10 04:46:58 WARN internals.Sender:307 Got error produce response
>>>> with correlation id 4205 on topic-partition topic04-0, retrying (2 attempts
>>>> left). Error: NETWORK_EXCEPTI
>>>> ON
>>>> 17/08/10 04:47:29 WARN internals.Sender:307 Got error produce response
>>>> with correlation id 4207 on topic-partition topic04-0, retrying (1 attempts
>>>> left). Error: NETWORK_EXCEPT$ON
>>>> 17/08/10 04:47:59 WARN internals.Sender:307 Got error produce response
>>>> with correlation id 4209 on topic-partition topic04-0, retrying (0 attempts
>>>> left). Error: NETWORK_EXCEPT$ON
>>>> java.util.concurrent.ExecutionException: 
>>>> org.apache.kafka.common.errors.NetworkException:
>>>> The server disconnected before a response was received.
>>>> at org.apache.kafka.clients.producer.internals.FutureRecordMeta
>>>> data.valueOrError(FutureRecordMetadata.java:65)
>>>> at org.apache.kafka.clients.producer.internals.FutureRecordMeta
>>>> data.get(FutureRecordMetadata.java:52)
>>>> at org.apache.kafka.clients.producer.internals.FutureRecordMeta
>>>> data.get(FutureRecordMetadata.java:25)
>>>>
>>>> --
>>>> Raghav
>>>>
>>>
>>
>>
>>
>> --
>> Raghav
>>
>



-- 
Raghav


Re: How to debug - NETWORK_EXCEPTION

2017-08-30 Thread Raghav
I was never able to debug this exception. I, unfortunately, moved to Apache
Kafka 10.2.1 from Confluent 3.2.1 and this issue went away.

On Wed, Aug 30, 2017 at 11:46 AM, Saravanan Tirugnanum 
wrote:

> Hi
>
> Were you able to figure out this issue? Any clue ?
>
> Regards
> Saravanan
>
>
> On Wednesday, August 9, 2017 at 11:51:19 PM UTC-5, Raghav wrote:
>>
>> Hi
>>
>> I am sending very small 32 byte message to Kafka broker in a tight loop
>> with 250ms sleep. I have one broker, 1 partition, and replication factor =
>> 1.
>>
>> After about 4200 messages, I get *following *error pasted below.
>>
>> How can I debug this error ? Can you please throw some ideas for me to
>> debug ? Stuck on this for a while now. Need help here.
>>
>> 17/08/10 04:46:58 WARN internals.Sender:307 Got error produce response
>> with correlation id 4205 on topic-partition topic04-0, retrying (2 attempts
>> left). Error: NETWORK_EXCEPTI
>> ON
>> 17/08/10 04:47:29 WARN internals.Sender:307 Got error produce response
>> with correlation id 4207 on topic-partition topic04-0, retrying (1 attempts
>> left). Error: NETWORK_EXCEPT$ON
>> 17/08/10 04:47:59 WARN internals.Sender:307 Got error produce response
>> with correlation id 4209 on topic-partition topic04-0, retrying (0 attempts
>> left). Error: NETWORK_EXCEPT$ON
>> java.util.concurrent.ExecutionException: 
>> org.apache.kafka.common.errors.NetworkException:
>> The server disconnected before a response was received.
>> at org.apache.kafka.clients.producer.internals.FutureRecordMeta
>> data.valueOrError(FutureRecordMetadata.java:65)
>> at org.apache.kafka.clients.producer.internals.FutureRecordMeta
>> data.get(FutureRecordMetadata.java:52)
>> at org.apache.kafka.clients.producer.internals.FutureRecordMeta
>> data.get(FutureRecordMetadata.java:25)
>>
>> --
>> Raghav
>>
>



-- 
Raghav


ZooKeeper issues with 3.4.9 0 Please Help

2017-08-20 Thread Raghav
Hi

I am trying to use the ZooKeeper 3.4.9 build (and jars) from the Apache
ZooKeeper site with Kafka 10.2.1. I want to run a 3-node ZK cluster
eventually, but to begin with I have only one node. I created my zoo.cfg as
follows:

tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888


The following are the logs once I start ZooKeeper from the command line.
You can see that towards the end it is killed. Any help is greatly
appreciated.

$ *java -cp
zookeeper-3.4.9.jar:lib/log4j-1.2.16.jar:lib/slf4j-log4j12-1.6.1.jar:lib/slf4j-api-1.6.1.jar:conf
org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo.cfg*

2017-08-20 07:16:22,812 [myid:] - INFO  [main:QuorumPeerConfig@124] -
Reading configuration from: conf/zoo.cfg
2017-08-20 07:16:22,819 [myid:] - INFO  [main:DatadirCleanupManager@78] -
autopurge.snapRetainCount set to 3
2017-08-20 07:16:22,819 [myid:] - INFO  [main:DatadirCleanupManager@79] -
autopurge.purgeInterval set to 0
2017-08-20 07:16:22,819 [myid:] - INFO  [main:DatadirCleanupManager@101] -
Purge task is not scheduled.
2017-08-20 07:16:22,822 [myid:] - WARN  [main:QuorumPeerMain@113] - Either
no config or no quorum defined in config, running  in standalone mode
2017-08-20 07:16:22,890 [myid:] - INFO  [main:QuorumPeerConfig@124] -
Reading configuration from: conf/zoo.cfg
2017-08-20 07:16:22,891 [myid:] - INFO  [main:ZooKeeperServerMain@96] -
Starting server
2017-08-20 07:16:22,900 [myid:] - INFO  [main:Environment@100] - Server
environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:host.name=kafka-1.node.rack1.consul
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:java.version=1.8.0_25
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:java.vendor=Oracle Corporation
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:java.home=/usr/java/jdk1.8.0_25/jre
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:java.class.path=zookeeper-3.4.9.jar:lib/log4j-1.2.16.jar:lib/slf4j-log4j12-1.6.1.jar:lib/slf4j-api-1.6.1.jar:conf
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:java.io.tmpdir=/tmp
2017-08-20 07:16:22,901 [myid:] - INFO  [main:Environment@100] - Server
environment:java.compiler=
2017-08-20 07:16:22,902 [myid:] - INFO  [main:Environment@100] - Server
environment:os.name=Linux
2017-08-20 07:16:22,902 [myid:] - INFO  [main:Environment@100] - Server
environment:os.arch=amd64
2017-08-20 07:16:22,902 [myid:] - INFO  [main:Environment@100] - Server
environment:os.version=3.10.0-514.10.2.el7.x86_64
2017-08-20 07:16:22,902 [myid:] - INFO  [main:Environment@100] - Server
environment:user.name=tetinstall
2017-08-20 07:16:22,902 [myid:] - INFO  [main:Environment@100] - Server
environment:user.home=/home/tetinstall
2017-08-20 07:16:22,903 [myid:] - INFO  [main:Environment@100] - Server
environment:user.dir=/opt/tetration/zookeeper/zookeeper-3.4.9
2017-08-20 07:16:22,908 [myid:] - INFO  [main:ZooKeeperServer@815] -
tickTime set to 2
2017-08-20 07:16:22,908 [myid:] - INFO  [main:ZooKeeperServer@824] -
minSessionTimeout set to -1
2017-08-20 07:16:22,908 [myid:] - INFO  [main:ZooKeeperServer@833] -
maxSessionTimeout set to -1
2017-08-20 07:16:22,915 [myid:] - INFO  [main:NIOServerCnxnFactory@89] -
binding to port 0.0.0.0/0.0.0.0:2181
Killed



-- 
Raghav


Re: Topic Creation fails - Need help

2017-08-18 Thread Raghav
Broker is 100% running. ZK path shows /broker/ids/1

On Fri, Aug 18, 2017 at 1:02 AM, Yang Cui  wrote:

> please use zk client to check the path:/brokers/ids in ZK
>
> Sent from my iPhone
>
> > On 18 Aug 2017, at 3:14 PM, Raghav  wrote:
> >
> > Hi
> >
> > I have a 1 broker and 1 zookeeper on the same VM. I am using Kafka
> 10.2.1.
> > I am trying to create a topic using below command:
> >
> > "bin/kafka-topics.sh --create --zookeeper localhost:2181
> > --replication-factor 1 --partitions 16 --topic topicTest04"
> >
> > It fails with the below error. Just wondering why admin tools don't think
> > that there is any broker available while the broker is up ? Any input is
> > greatly appreciated.
> >
> > *"Error while executing topic command : replication factor: 1 larger than
> > available brokers: 0*
> > *[2017-08-18 07:05:47,813] ERROR
> > org.apache.kafka.common.errors.InvalidReplicationFactorException:
> > replication factor: 1 larger than available brokers: 0*
> > * (kafka.admin.TopicCommand$)"*
> >
> > Thanks.
> >
> > --
> > Raghav
>



-- 
Raghav


Topic Creation fails - Need help

2017-08-18 Thread Raghav
Hi

I have one broker and one ZooKeeper on the same VM. I am using Kafka 10.2.1.
I am trying to create a topic using the command below:

"bin/kafka-topics.sh --create --zookeeper localhost:2181
--replication-factor 1 --partitions 16 --topic topicTest04"

It fails with the error below. Just wondering why the admin tools don't
think there is any broker available while the broker is up? Any input is
greatly appreciated.

*"Error while executing topic command : replication factor: 1 larger than
available brokers: 0*
*[2017-08-18 07:05:47,813] ERROR
org.apache.kafka.common.errors.InvalidReplicationFactorException:
replication factor: 1 larger than available brokers: 0*
* (kafka.admin.TopicCommand$)"*

Thanks.

-- 
Raghav


Re: How to debug - NETWORK_EXCEPTION

2017-08-12 Thread Raghav
Hey Martin

I am using the default setting for queue.enqueue.timeout.ms, since I have
not set it in my Java client. The network doesn't seem to time out either.

Could I be missing something else?

On Sat, Aug 12, 2017 at 5:18 AM, Martin Gainty  wrote:

>
>
> ____
> From: Raghav 
> Sent: Thursday, August 10, 2017 12:51 AM
> To: Users; confluent-platf...@googlegroups.com
> Subject: How to debug - NETWORK_EXCEPTION
>
> Hi
>
> I am sending very small 32 byte message to Kafka broker in a tight loop
> with 250ms sleep. I have one broker, 1 partition, and replication factor =
> 1.
>
> After about 4200 messages, I get *following *error pasted below.
>
> How can I debug this error ? Can you please throw some ideas for me to
> debug ? Stuck on this for a while now. Need help here.
>
> 17/08/10 04:46:58 WARN internals.Sender:307 Got error produce response with
> correlation id 4205 on topic-partition topic04-0, retrying (2 attempts
> left). Error: NETWORK_EXCEPTI
> ON
> 17/08/10 04:47:29 WARN internals.Sender:307 Got error produce response with
> correlation id 4207 on topic-partition topic04-0, retrying (1 attempts
> left). Error: NETWORK_EXCEPT$ON
> 17/08/10 04:47:59 WARN internals.Sender:307 Got error produce response with
> correlation id 4209 on topic-partition topic04-0, retrying (0 attempts
> left). Error: NETWORK_EXCEPT$ON
> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.NetworkException: The server disconnected
> before a response was received.
> at
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.
> valueOrError(FutureRecordMetadata.java:65)
> MG>i would check one of 2 possibilities :
> MG>A)your network is timing out
> MG>B)your message is queue is full
> MG>i would check timeout first so
> MG>in producer.properties check queue.enqueue.timeout.ms=
> MG>https://kafka.apache.org/08/documentation.html
>
> MG>if you have zookeeper enabled check zookeeper.connection.timeout.ms in
> both server.properties and consumer.properties
>
> MG>check consumer.timeout.ms in consumer.properties
>
> at
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(
> FutureRecordMetadata.java:52)
> at
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(
> FutureRecordMetadata.java:25)
>
> --
> Raghav
>



-- 
Raghav


Issues due to Running multiple Kafka clusters using the same zookeeper service

2017-08-11 Thread Raghav
=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@662022ab,
request=RequestSend(header={api_key=10,api_version=0,correlation_id=495,client_id=consumer-1},
body={group_id=ConsumerGroup05}), createdTimeMs=1502479584961,
sendTimeMs=1502479584961),
responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
17/08/11 19:26:24 DEBUG internals.AbstractCoordinator:563 Group coordinator
lookup for group ConsumerGroup05 failed: The group coordinator is not
available.

Thanks.

-- 
Raghav


How to debug - NETWORK_EXCEPTION

2017-08-09 Thread Raghav
Hi

I am sending a very small 32-byte message to the Kafka broker in a tight
loop with a 250 ms sleep. I have one broker, 1 partition, and replication
factor = 1.

After about 4200 messages, I get the following error, pasted below.

How can I debug this error? Can you please throw out some ideas for me to
debug? I have been stuck on this for a while now and need help here.

17/08/10 04:46:58 WARN internals.Sender:307 Got error produce response with
correlation id 4205 on topic-partition topic04-0, retrying (2 attempts
left). Error: NETWORK_EXCEPTION
17/08/10 04:47:29 WARN internals.Sender:307 Got error produce response with
correlation id 4207 on topic-partition topic04-0, retrying (1 attempts
left). Error: NETWORK_EXCEPTION
17/08/10 04:47:59 WARN internals.Sender:307 Got error produce response with
correlation id 4209 on topic-partition topic04-0, retrying (0 attempts
left). Error: NETWORK_EXCEPTION
java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.NetworkException: The server disconnected
before a response was received.
at
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:65)
at
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
at
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)

-- 
Raghav
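
One way to get more detail on failures like this, as a sketch rather than a definitive fix: attach a callback to each send so the exact exception behind every failed attempt is logged, and give the producer more headroom than the retries=0 and request.timeout.ms=5000 shown in the earlier config dump. All addresses and values below are illustrative only:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DebugSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.10.10.5:9092");   // placeholder address
        props.put("acks", "all");
        props.put("retries", "5");                            // more headroom than the retries=0 in the dump
        props.put("request.timeout.ms", "30000");             // back to the client default instead of 5000
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                ProducerRecord<String, String> record = new ProducerRecord<>("topic04", "msg-" + i);
                // The callback runs per record, so the exception behind each failed send is visible
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("ok: offset " + metadata.offset());
                    }
                });
            }
            producer.flush();
        }
    }
}

Checking the broker's server.log at the same timestamps may also show why the connection was closed from the server side.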


Re: How to perform keytool operation using Java code

2017-07-13 Thread Raghav
Thanks.

On Thu, Jul 13, 2017 at 2:41 AM, Rajini Sivaram 
wrote:

> Hi Raghav,
>
> You could take a look at https://github.com/apache/
> kafka/blob/trunk/clients/src/test/java/org/apache/kafka/
> test/TestSslUtils.java
>
> Regards,
>
> Rajini
>
> On Wed, Jul 12, 2017 at 10:23 PM, Raghav  wrote:
>
>> Guys, Would anyone know about it ?
>>
>> On Tue, Jul 11, 2017 at 6:20 AM, Raghav  wrote:
>>
>>> Hi
>>>
>>> I followed https://kafka.apache.org/documentation/#security to create
>>> keystore and trust store using Java Keytool. Now, I am looking to do the
>>> same stuff programmatically using Java. I am struggling to find the right
>>> Java classes to perform following operations:
>>>
>>> 1. How to extract CSR from a keystore using Java classes ?
>>>
>>> 2. How to add a CA cert to a keystore using Java classes ?
>>>
>>> I tried to following http://docs.oracle.com/javase/6/docs/api/java/secu
>>> rity/KeyStore.html#load%28java.io.InputStream,%20char%5B%5D%29 but
>>> could not get answers.
>>>
>>> Any help here is greatly appreciated.
>>>
>>> Thanks.
>>>
>>> --
>>> Raghav
>>>
>>
>>
>>
>> --
>> Raghav
>>
>
>


-- 
Raghav
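
For the part that plain JDK classes cover, a minimal sketch of adding a CA certificate to an existing JKS keystore programmatically is below; the paths, alias and password are placeholders. Generating a CSR is the piece the public JDK API does not expose, so that step usually needs an extra library such as Bouncy Castle.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class AddCaCert {
    public static void main(String[] args) throws Exception {
        char[] storePassword = "keystore-password".toCharArray();   // placeholder

        // Load the existing JKS keystore, the same file keytool operates on
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("/tmp/acl/client.keystore.jks")) {
            keyStore.load(in, storePassword);
        }

        // Read the CA certificate (PEM or DER) and add it as a trusted entry,
        // the programmatic equivalent of "keytool -importcert -alias CARoot"
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        Certificate caCert;
        try (FileInputStream certIn = new FileInputStream("/tmp/acl/ca-cert.pem")) {
            caCert = factory.generateCertificate(certIn);
        }
        keyStore.setCertificateEntry("CARoot", caCert);

        // Write the modified keystore back to disk
        try (FileOutputStream out = new FileOutputStream("/tmp/acl/client.keystore.jks")) {
            keyStore.store(out, storePassword);
        }
    }
}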


Re: How to perform keytool operation using Java code

2017-07-12 Thread Raghav
Guys, would anyone know about this?

On Tue, Jul 11, 2017 at 6:20 AM, Raghav  wrote:

> Hi
>
> I followed https://kafka.apache.org/documentation/#security to create
> keystore and trust store using Java Keytool. Now, I am looking to do the
> same stuff programmatically using Java. I am struggling to find the right
> Java classes to perform following operations:
>
> 1. How to extract CSR from a keystore using Java classes ?
>
> 2. How to add a CA cert to a keystore using Java classes ?
>
> I tried to following http://docs.oracle.com/javase/6/docs/api/java/
> security/KeyStore.html#load%28java.io.InputStream,%20char%5B%5D%29 but
> could not get answers.
>
> Any help here is greatly appreciated.
>
> Thanks.
>
> --
> Raghav
>



-- 
Raghav


How to perform keytool operation using Java code

2017-07-11 Thread Raghav
Hi

I followed https://kafka.apache.org/documentation/#security to create a
keystore and truststore using the Java keytool. Now, I am looking to do the
same thing programmatically in Java. I am struggling to find the right Java
classes to perform the following operations:

1. How to extract CSR from a keystore using Java classes ?

2. How to add a CA cert to a keystore using Java classes ?

I tried following
http://docs.oracle.com/javase/6/docs/api/java/security/KeyStore.html#load%28java.io.InputStream,%20char%5B%5D%29
but could not get answers.

Any help here is greatly appreciated.

Thanks.

-- 
Raghav


Re: Kafka Authorization and ACLs Broken

2017-07-05 Thread Raghav
Hi Rajini

Now that 0.11.0 is out, can we use the AdminClient? Is there any example
code for these operations?

Thanks.

On Wed, May 24, 2017 at 9:06 PM, Rajini Sivaram 
wrote:

> Hi Raghav,
>
> Yes, you can create ACLs programmatically. Take a look at the use of
> AclCommand.main in https://github.com/apache/kafka/blob/trunk/core/src/
> test/scala/integration/kafka/api/EndToEndAuthorizationTest.scala
>
> If you can wait for the next release 0.11.0 that will be out next month,
> you can use the new Java AdminClient, which allows you to do this in a much
> neater way. Take a look at the interface https://github.com/
> apache/kafka/blob/trunk/clients/src/main/java/org/
> apache/kafka/clients/admin/AdminClient.java
> <https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/AdminClient.java>
>
> If your release is not imminent, then you could build Kafka from the
> 0.11.0 branch and use the new AdminClient. When the release is out, you can
> switch over to the binary release.
>
> Regards,
>
> Rajini
>
>
>
> On Wed, May 24, 2017 at 4:13 PM, Raghav  wrote:
>
>> Hi Rajini
>>
>> Quick question on Configuring ACLs: We used bin/kafka-acls.sh to
>> configure ACL rules, which internally uses Kafka Admin APIs to configure
>> the ACLs.
>>
>> Can I add, remove and list ACLs via zk client libraries ? I want to be
>> able to add, remove, list ACLs via my code rather than using Kafka-acl.sh.
>> Is there a guideline for recommended set of libraries to use to do such
>> operations ?
>>
>> As always thanks so much.
>>
>>
>>
>> On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram 
>> wrote:
>>
>>> Raghav/Darshan,
>>>
>>> Can you try these steps on a clean installation of Kafka? It works for
>>> me, so hopefully it will work for you. And then you can adapt to your
>>> scenario.
>>>
>>> *Create keystores and truststores:*
>>>
>>> keytool -genkey -alias kafka -keystore server.keystore.jks -dname
>>> "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
>>> -keypass server-key-password
>>>
>>> keytool -exportcert -file server-cert-file -keystore server.keystore.jks
>>> -alias kafka -storepass server-keystore-password
>>>
>>> keytool -importcert -file server-cert-file -keystore
>>> server.truststore.jks -alias kafka -storepass server-truststore-password
>>> -noprompt
>>>
>>> keytool -importcert -file server-cert-file -keystore
>>> client.truststore.jks -alias kafkaclient -storepass
>>> client-truststore-password -noprompt
>>>
>>>
>>> keytool -genkey -alias kafkaclient -keystore client.keystore.jks -dname
>>> "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
>>> -keypass client-key-password
>>>
>>> keytool -exportcert -file client-cert-file -keystore client.keystore.jks
>>> -alias kafkaclient -storepass client-keystore-password
>>>
>>> keytool -importcert -file client-cert-file -keystore
>>> server.truststore.jks -alias kafkaclient -storepass
>>> server-truststore-password -noprompt
>>>
>>> *Configure broker: Add these lines at the end of your server.properties*
>>>
>>> listeners=SSL://:9093
>>>
>>> advertised.listeners=SSL://127.0.0.1:9093
>>>
>>> ssl.keystore.location=/tmp/acl/server.keystore.jks
>>>
>>> ssl.keystore.password=server-keystore-password
>>>
>>> ssl.key.password=server-key-password
>>>
>>> ssl.truststore.location=/tmp/acl/server.truststore.jks
>>>
>>> ssl.truststore.password=server-truststore-password
>>>
>>> security.inter.broker.protocol=SSL
>>>
>>> security.protocol=SSL
>>>
>>> ssl.client.auth=required
>>>
>>> allow.everyone.if.no.acl.found=false
>>>
>>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>>>
>>> super.users=User:CN=KafkaBroker,O=Pivotal,C=UK
>>>
>>> *Configure producer: producer.properties*
>>>
>>> security.protocol=SSL
>>>
>>> ssl.truststore.location=/tmp/acl/client.truststore.jks
>>>
>>> ssl.truststore.password=client-truststore-password
>>>
>>> ssl.keystore.location=/tmp/acl/client.keystore.jks
>>>
>>> ssl.keystore.password=client-keystore-password
>>>
>>> ssl.key.password=client-key-password
>>>
>>>
>>> *Configure con
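
Before the AdminClient ACL APIs, the class behind kafka-acls.sh could also be invoked directly, which is what the AclCommand.main usage Rajini points at does. A minimal sketch with placeholder principal, topic and ZooKeeper address (the kafka core jar must be on the classpath):

public class AddAclViaCommand {
    public static void main(String[] args) {
        // Same arguments the bin/kafka-acls.sh wrapper passes on the command line
        kafka.admin.AclCommand.main(new String[] {
                "--authorizer-properties", "zookeeper.connect=localhost:2181",
                "--add",
                "--allow-principal", "User:CN=KafkaClient,O=Pivotal,C=UK",
                "--operation", "Read",
                "--topic", "test-topic"
        });
    }
}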

Re: Kafka 0.11.0 release

2017-06-23 Thread Raghav
Thanks for the update, Guozhang.

On Thu, Jun 22, 2017 at 9:52 PM, Guozhang Wang  wrote:

> Raghav,
>
> We are going through the voting process now, expecting to have another RC
> and release in a few more days.
>
>
> Guozhang
>
> On Thu, Jun 22, 2017 at 3:59 AM, Raghav  wrote:
>
> > Hi
> >
> > Would anyone know when is the Kafka 0.11.0 scheduled to be released ?
> >
> > Thanks.
> >
> > --
> > Raghav
> >
>
>
>
> --
> -- Guozhang
>



-- 
Raghav


Kafka 0.11.0 release

2017-06-22 Thread Raghav
Hi

Would anyone know when Kafka 0.11.0 is scheduled to be released?

Thanks.

-- 
Raghav


Re: advertised.listeners

2017-05-31 Thread Raghav
Hello Darshan

Have you tried SSL://0.0.0.0:9093 ?

Rajini had suggested something similar to me a week back while I was trying
to get an ACL-based setup.

Thanks.

On Wed, May 31, 2017 at 8:58 AM, Darshan 
wrote:

> Hi
>
> Our Kafka broker has two IPs on two different interfaces.
>
> eth0 has 172.x.x.x for external leg
> eth1 has 1.x.x.x for internal leg
>
>
> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
> subnet.
>
> If we use advertised.listeners=SSL://172.x.x.x:9093, then Producer can
> producer the message, but Consumer cannot receive the message.
>
> What value should we use for advertised.listeners so that Producer can
> write and Consumers can read ?
>
> Thanks.
>



-- 
Raghav


Re: Java APIs for ZooKeeper related operations

2017-05-30 Thread Raghav
Hans

When will this version (0.11) be available ?

On Tue, May 30, 2017 at 3:54 PM, Hans Jespersen  wrote:

> Probably important to read and understand these enhancements coming in 0.11
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-117%3A+Add+a+public+
> AdminClient+API+for+Kafka+admin+operations
>
> -hans
>
> /**
>  * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
>  * h...@confluent.io (650)924-2670
>  */
>
> On Tue, May 30, 2017 at 3:50 PM, Mohammed Manna 
> wrote:
>
> > 1) For issue no. 1 I think you might find the AdminUtils useful This link
> > <https://stackoverflow.com/questions/16946778/how-can-we-
> > create-a-topic-in-kafka-from-the-ide-using-api>
> > should
> > help you understand.
> >
> > I haven't got around using ACL for Kafka yet (as I am still doing PoC
> > myself) - so probably some other power user can chime in?
> >
> > KR,
> >
> > On 30 May 2017 at 23:35, Raghav  wrote:
> >
> > > Hi
> > >
> > > I want to know if there are Java APIs for the following. I want to be
> > able
> > > to do these things programmatically in our Kafka cluster. Using command
> > > line tools seems a bit hacky. Please advise the right way to do, and
> any
> > > pointers to Library in Java, Python or Go.
> > >
> > > 1. Creating topic with a given replication and partition.
> > > 2. Push ACLs into Kafka Cluster
> > > 3. Get existing ACL info from Kafka Cluster
> > >
> > > Thanks.
> > >
> > > Raghav
> > >
> >
>



-- 
Raghav


Java APIs for ZooKeeper related operations

2017-05-30 Thread Raghav
Hi

I want to know if there are Java APIs for the following. I want to be able
to do these things programmatically in our Kafka cluster. Using command-line
tools seems a bit hacky. Please advise on the right way to do this, and any
pointers to libraries in Java, Python, or Go.

1. Creating a topic with a given replication factor and partition count.
2. Pushing ACLs into the Kafka cluster.
3. Getting existing ACL info from the Kafka cluster.

Thanks.

Raghav
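
A rough sketch covering all three items with the Java AdminClient (available from 0.11.0, per KIP-117 above): the topic name, principal and bootstrap address are placeholders, and the Resource/AccessControlEntry classes are the 0.11-era ACL API, which later releases replace with ResourcePattern.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.Resource;
import org.apache.kafka.common.resource.ResourceType;

public class AdminExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // 1. Create a topic with a given partition count and replication factor
            admin.createTopics(Collections.singleton(new NewTopic("test-topic", 16, (short) 1)))
                 .all().get();

            // 2. Push an ACL: allow a principal to read the topic from any host
            AclBinding binding = new AclBinding(
                    new Resource(ResourceType.TOPIC, "test-topic"),
                    new AccessControlEntry("User:CN=KafkaClient,O=Pivotal,C=UK", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();

            // 3. List the ACLs the cluster currently knows about
            admin.describeAcls(AclBindingFilter.ANY).values().get()
                 .forEach(System.out::println);
        }
    }
}

The ACL calls only succeed against a broker that has an authorizer configured.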


Re: Kafka Authorization and ACLs Broken

2017-05-26 Thread Raghav
Hi Alex

In fact I copied the same configuration that Rajini pasted above and it
worked for me. You can try the same. Let me know if it doesn't work.

Thanks.

On Fri, May 26, 2017 at 4:19 AM, Kamalov, Alex 
wrote:

> Hey Raghav,
>
>
>
> Yes, I would very much love to get your configs, so I can model against it.
>
>
>
> Thanks again,
>
>
>
> Alex
>
>
>
> *From: *Raghav 
> *Date: *Thursday, May 25, 2017 at 10:54 PM
> *To: *Mike Marzo 
> *Cc: *Darshan Purandare , Rajini Sivaram <
> rajinisiva...@gmail.com>, Users , Alex Kamalov <
> alex.kama...@bnymellon.com>
> *Subject: *Re: Kafka Authorization and ACLs Broken
>
>
>
> In SSL cert, there is a field which has a CN (Common Name). So when ACLs
> are set, they are set for that CN. This is how the ACLs are configured and
> matched against. I am still pretty new to Kafka in general, but this is how
> I think it works. I can copy my config if you want.
>
>
>
> On Thu, May 25, 2017 at 12:51 PM, Mike Marzo <
> precisionarchery...@gmail.com> wrote:
>
> Stupid question
>
> If u don't specify a jaas file how does the consumer and producer specify
> the Id that acl's are configured against   boy I am getting more and
> more perplexed by this...
>
> mike marzo
> 908 209-4484 <(908)%20209-4484>
>
>
>
> On May 24, 2017 9:29 PM, "Raghav"  wrote:
>
> Mike
>
>
>
> I am not using jaas file. I literally took the config Rajini gave in the
> previous email and it worked for me. I am using ssl Kafka with ACLs. I am
> not suing kerberos.
>
>
>
> Thanks.
>
>
>
> On Wed, May 24, 2017 at 11:29 AM, Mike Marzo <
> precisionarchery...@gmail.com> wrote:
>
> I'm also having issues getting acls to work.  Out of intereat, are you
> starting ur brokers with a jaas file, if so do u mind sharing the client
> and server side jaas entries so I can validate what I'm doing.
>
> mike marzo
> 908 209-4484
>
> On May 24, 2017 10:54 AM, "Raghav"  wrote:
>
> > Hi Rajini
> >
> > Thank you very much. It perfectly works.
> >
> > I think in my setup I was trying to use a CA (certificate authority) to
> > sign the certificates from client and server, and then adding it to trust
> > store and keystore. I think in that process, I may have messed
> something. I
> > will try above config with a CA to sign certificates. Hopefully that
> would
> > work too.
> >
> > Thanks a lot again.
> >
> > Raghav
> >
> >
> >
> >
> > On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram  >
> > wrote:
> >
> > > Raghav/Darshan,
> > >
> > > Can you try these steps on a clean installation of Kafka? It works for
> > me,
> > > so hopefully it will work for you. And then you can adapt to your
> > scenario.
> > >
> > > *Create keystores and truststores:*
> > >
> > > keytool -genkey -alias kafka -keystore server.keystore.jks -dname
> > > "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
> > > -keypass server-key-password
> > >
> > > keytool -exportcert -file server-cert-file -keystore
> server.keystore.jks
> > > -alias kafka -storepass server-keystore-password
> > >
> > > keytool -importcert -file server-cert-file -keystore
> > server.truststore.jks
> > > -alias kafka -storepass server-truststore-password -noprompt
> > >
> > > keytool -importcert -file server-cert-file -keystore
> > client.truststore.jks
> > > -alias kafkaclient -storepass client-truststore-password -noprompt
> > >
> > >
> > > keytool -genkey -alias kafkaclient -keystore client.keystore.jks -dname
> > > "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
> > > -keypass client-key-password
> > >
> > > keytool -exportcert -file client-cert-file -keystore
> client.keystore.jks
> > > -alias kafkaclient -storepass client-keystore-password
> > >
> > > keytool -importcert -file client-cert-file -keystore
> > server.truststore.jks
> > > -alias kafkaclient -storepass server-truststore-password -noprompt
> > >
> > > *Configure broker: Add these lines at the end of your
> server.properties*
> > >
> > > listeners=SSL://:9093
> > >
> > > advertised.listeners=SSL://127.0.0.1:9093
> > >
> > > ssl.keystore.location=/tmp/acl/server.keystore.jks
> > >
> > > ssl.keystore.password=server-keystore-password
> > 

Re: Kafka Authorization and ACLs Broken

2017-05-25 Thread Raghav
In SSL cert, there is a field which has a CN (Common Name). So when ACLs
are set, they are set for that CN. This is how the ACLs are configured and
matched against. I am still pretty new to Kafka in general, but this is how
I think it works. I can copy my config if you want.

On Thu, May 25, 2017 at 12:51 PM, Mike Marzo 
wrote:

> Stupid question
> If u don't specify a jaas file how does the consumer and producer specify
> the Id that acl's are configured against   boy I am getting more and
> more perplexed by this...
>
> mike marzo
> 908 209-4484 <(908)%20209-4484>
>
> On May 24, 2017 9:29 PM, "Raghav"  wrote:
>
>> Mike
>>
>> I am not using jaas file. I literally took the config Rajini gave in the
>> previous email and it worked for me. I am using ssl Kafka with ACLs. I am
>> not suing kerberos.
>>
>> Thanks.
>>
>> On Wed, May 24, 2017 at 11:29 AM, Mike Marzo <
>> precisionarchery...@gmail.com> wrote:
>>
>>> I'm also having issues getting acls to work.  Out of intereat, are you
>>> starting ur brokers with a jaas file, if so do u mind sharing the client
>>> and server side jaas entries so I can validate what I'm doing.
>>>
>>> mike marzo
>>> 908 209-4484
>>>
>>> On May 24, 2017 10:54 AM, "Raghav"  wrote:
>>>
>>> > Hi Rajini
>>> >
>>> > Thank you very much. It perfectly works.
>>> >
>>> > I think in my setup I was trying to use a CA (certificate authority) to
>>> > sign the certificates from client and server, and then adding it to
>>> trust
>>> > store and keystore. I think in that process, I may have messed
>>> something. I
>>> > will try above config with a CA to sign certificates. Hopefully that
>>> would
>>> > work too.
>>> >
>>> > Thanks a lot again.
>>> >
>>> > Raghav
>>> >
>>> >
>>> >
>>> >
>>> > On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram <
>>> rajinisiva...@gmail.com>
>>> > wrote:
>>> >
>>> > > Raghav/Darshan,
>>> > >
>>> > > Can you try these steps on a clean installation of Kafka? It works
>>> for
>>> > me,
>>> > > so hopefully it will work for you. And then you can adapt to your
>>> > scenario.
>>> > >
>>> > > *Create keystores and truststores:*
>>> > >
>>> > > keytool -genkey -alias kafka -keystore server.keystore.jks -dname
>>> > > "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
>>> > > -keypass server-key-password
>>> > >
>>> > > keytool -exportcert -file server-cert-file -keystore
>>> server.keystore.jks
>>> > > -alias kafka -storepass server-keystore-password
>>> > >
>>> > > keytool -importcert -file server-cert-file -keystore
>>> > server.truststore.jks
>>> > > -alias kafka -storepass server-truststore-password -noprompt
>>> > >
>>> > > keytool -importcert -file server-cert-file -keystore
>>> > client.truststore.jks
>>> > > -alias kafkaclient -storepass client-truststore-password -noprompt
>>> > >
>>> > >
>>> > > keytool -genkey -alias kafkaclient -keystore client.keystore.jks
>>> -dname
>>> > > "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
>>> > > -keypass client-key-password
>>> > >
>>> > > keytool -exportcert -file client-cert-file -keystore
>>> client.keystore.jks
>>> > > -alias kafkaclient -storepass client-keystore-password
>>> > >
>>> > > keytool -importcert -file client-cert-file -keystore
>>> > server.truststore.jks
>>> > > -alias kafkaclient -storepass server-truststore-password -noprompt
>>> > >
>>> > > *Configure broker: Add these lines at the end of your
>>> server.properties*
>>> > >
>>> > > listeners=SSL://:9093
>>> > >
>>> > > advertised.listeners=SSL://127.0.0.1:9093
>>> > >
>>> > > ssl.keystore.location=/tmp/acl/server.keystore.jks
>>> > >
>>> > > ssl.keystore.password=server-keystore-password
>>> > >
>>> > > ssl.key.password=server-key-password
>>> > >
>&

Re: Kafka Configuration Question

2017-05-25 Thread Raghav
It looks like you forgot to attach your server.properties file.

On Wed, May 24, 2017 at 10:25 PM, Bennett, Conrad <
conrad.benn...@verizonwireless.com.invalid> wrote:

> Hello,
>
> I’m hoping someone could provide me with some assistance please. I am in
> the process of attempting to standing up a Kafka cluster and I have 7 nodes
> all of which has kafka and zookeeper installed. I have attached my
> server.properties file to verify whether or not I have anything
> misconfigured but each time I try to start the kafka service it fails with
> the error timed out connecting to zookeeper but the zookeeper process is up
> and running. Also during my research I read in order to achieve better
> performance separate drives for kafka data should be configure, but in the
> configuration file I didn’t understand where exactly that should be
> configure. Any assistance would be greatly appreciated. Thanks in advance
>
> kafka: { version: 0.10.1.1 }
>
> zkper: { version: 3.4.9 }
>
> Conrad Bennett Jr.
>
>


-- 
Raghav


Re: Kafka Authorization and ACLs Broken

2017-05-24 Thread Raghav
I initially tried Kerberos, but it felt too complicated, so I gave up and
only tried SSL.

On Wed, May 24, 2017 at 7:47 PM, Mike Marzo 
wrote:

> Thanks.  We will try it.  Struggling with krb5 and acls
>
> mike marzo
> 908 209-4484 <(908)%20209-4484>
>
> On May 24, 2017 9:29 PM, "Raghav"  wrote:
>
>> Mike
>>
>> I am not using jaas file. I literally took the config Rajini gave in the
>> previous email and it worked for me. I am using ssl Kafka with ACLs. I am
>> not suing kerberos.
>>
>> Thanks.
>>
>> On Wed, May 24, 2017 at 11:29 AM, Mike Marzo <
>> precisionarchery...@gmail.com> wrote:
>>
>>> I'm also having issues getting acls to work.  Out of intereat, are you
>>> starting ur brokers with a jaas file, if so do u mind sharing the client
>>> and server side jaas entries so I can validate what I'm doing.
>>>
>>> mike marzo
>>> 908 209-4484
>>>
>>> On May 24, 2017 10:54 AM, "Raghav"  wrote:
>>>
>>> > Hi Rajini
>>> >
>>> > Thank you very much. It perfectly works.
>>> >
>>> > I think in my setup I was trying to use a CA (certificate authority) to
>>> > sign the certificates from client and server, and then adding it to
>>> trust
>>> > store and keystore. I think in that process, I may have messed
>>> something. I
>>> > will try above config with a CA to sign certificates. Hopefully that
>>> would
>>> > work too.
>>> >
>>> > Thanks a lot again.
>>> >
>>> > Raghav
>>> >
>>> >
>>> >
>>> >
>>> > On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram <
>>> rajinisiva...@gmail.com>
>>> > wrote:
>>> >
>>> > > Raghav/Darshan,
>>> > >
>>> > > Can you try these steps on a clean installation of Kafka? It works
>>> for
>>> > me,
>>> > > so hopefully it will work for you. And then you can adapt to your
>>> > scenario.
>>> > >
>>> > > *Create keystores and truststores:*
>>> > >
>>> > > keytool -genkey -alias kafka -keystore server.keystore.jks -dname
>>> > > "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
>>> > > -keypass server-key-password
>>> > >
>>> > > keytool -exportcert -file server-cert-file -keystore
>>> server.keystore.jks
>>> > > -alias kafka -storepass server-keystore-password
>>> > >
>>> > > keytool -importcert -file server-cert-file -keystore
>>> > server.truststore.jks
>>> > > -alias kafka -storepass server-truststore-password -noprompt
>>> > >
>>> > > keytool -importcert -file server-cert-file -keystore
>>> > client.truststore.jks
>>> > > -alias kafkaclient -storepass client-truststore-password -noprompt
>>> > >
>>> > >
>>> > > keytool -genkey -alias kafkaclient -keystore client.keystore.jks
>>> -dname
>>> > > "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
>>> > > -keypass client-key-password
>>> > >
>>> > > keytool -exportcert -file client-cert-file -keystore
>>> client.keystore.jks
>>> > > -alias kafkaclient -storepass client-keystore-password
>>> > >
>>> > > keytool -importcert -file client-cert-file -keystore
>>> > server.truststore.jks
>>> > > -alias kafkaclient -storepass server-truststore-password -noprompt
>>> > >
>>> > > *Configure broker: Add these lines at the end of your
>>> server.properties*
>>> > >
>>> > > listeners=SSL://:9093
>>> > >
>>> > > advertised.listeners=SSL://127.0.0.1:9093
>>> > >
>>> > > ssl.keystore.location=/tmp/acl/server.keystore.jks
>>> > >
>>> > > ssl.keystore.password=server-keystore-password
>>> > >
>>> > > ssl.key.password=server-key-password
>>> > >
>>> > > ssl.truststore.location=/tmp/acl/server.truststore.jks
>>> > >
>>> > > ssl.truststore.password=server-truststore-password
>>> > >
>>> > > security.inter.broker.protocol=SSL
>>> > >
>>> > > security.protocol=SSL
>>> > >
>>

Re: Kafka Authorization and ACLs Broken

2017-05-24 Thread Raghav
Mike

I am not using jaas file. I literally took the config Rajini gave in the
previous email and it worked for me. I am using ssl Kafka with ACLs. I am
not using kerberos.

Thanks.
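For the JAAS question in Mike's quoted note below, a minimal hedged sketch of a
broker-side kafka_server_jaas.conf entry for the SASL/Kerberos setup; the keytab
path and principal are purely illustrative, and the client side uses a KafkaClient
section of the same shape:

KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka_server.keytab"
  principal="kafka/broker1.example.com@EXAMPLE.COM";
};

The file is passed to the broker JVM with -Djava.security.auth.login.config=<path>.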

On Wed, May 24, 2017 at 11:29 AM, Mike Marzo 
wrote:

> I'm also having issues getting acls to work.  Out of interest, are you
> starting ur brokers with a jaas file, if so do u mind sharing the client
> and server side jaas entries so I can validate what I'm doing.
>
> mike marzo
> 908 209-4484
>
> On May 24, 2017 10:54 AM, "Raghav"  wrote:
>
> > Hi Rajini
> >
> > Thank you very much. It perfectly works.
> >
> > I think in my setup I was trying to use a CA (certificate authority) to
> > sign the certificates from client and server, and then adding it to trust
> > store and keystore. I think in that process, I may have messed
> something. I
> > will try above config with a CA to sign certificates. Hopefully that
> would
> > work too.
> >
> > Thanks a lot again.
> >
> > Raghav
> >
> >
> >
> >
> > On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram  >
> > wrote:
> >
> > > Raghav/Darshan,
> > >
> > > Can you try these steps on a clean installation of Kafka? It works for
> > me,
> > > so hopefully it will work for you. And then you can adapt to your
> > scenario.
> > >
> > > *Create keystores and truststores:*
> > >
> > > keytool -genkey -alias kafka -keystore server.keystore.jks -dname
> > > "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
> > > -keypass server-key-password
> > >
> > > keytool -exportcert -file server-cert-file -keystore
> server.keystore.jks
> > > -alias kafka -storepass server-keystore-password
> > >
> > > keytool -importcert -file server-cert-file -keystore
> > server.truststore.jks
> > > -alias kafka -storepass server-truststore-password -noprompt
> > >
> > > keytool -importcert -file server-cert-file -keystore
> > client.truststore.jks
> > > -alias kafkaclient -storepass client-truststore-password -noprompt
> > >
> > >
> > > keytool -genkey -alias kafkaclient -keystore client.keystore.jks -dname
> > > "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
> > > -keypass client-key-password
> > >
> > > keytool -exportcert -file client-cert-file -keystore
> client.keystore.jks
> > > -alias kafkaclient -storepass client-keystore-password
> > >
> > > keytool -importcert -file client-cert-file -keystore
> > server.truststore.jks
> > > -alias kafkaclient -storepass server-truststore-password -noprompt
> > >
> > > *Configure broker: Add these lines at the end of your
> server.properties*
> > >
> > > listeners=SSL://:9093
> > >
> > > advertised.listeners=SSL://127.0.0.1:9093
> > >
> > > ssl.keystore.location=/tmp/acl/server.keystore.jks
> > >
> > > ssl.keystore.password=server-keystore-password
> > >
> > > ssl.key.password=server-key-password
> > >
> > > ssl.truststore.location=/tmp/acl/server.truststore.jks
> > >
> > > ssl.truststore.password=server-truststore-password
> > >
> > > security.inter.broker.protocol=SSL
> > >
> > > security.protocol=SSL
> > >
> > > ssl.client.auth=required
> > >
> > > allow.everyone.if.no.acl.found=false
> > >
> > > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> > >
> > > super.users=User:CN=KafkaBroker,O=Pivotal,C=UK
> > >
> > > *Configure producer: producer.properties*
> > >
> > > security.protocol=SSL
> > >
> > > ssl.truststore.location=/tmp/acl/client.truststore.jks
> > >
> > > ssl.truststore.password=client-truststore-password
> > >
> > > ssl.keystore.location=/tmp/acl/client.keystore.jks
> > >
> > > ssl.keystore.password=client-keystore-password
> > >
> > > ssl.key.password=client-key-password
> > >
> > >
> > > *Configure consumer: consumer.properties*
> > >
> > > security.protocol=SSL
> > >
> > > ssl.truststore.location=/tmp/acl/client.truststore.jks
> > >
> > > ssl.truststore.password=client-truststore-password
> > >
> > > ssl.keystore.location=/tmp/acl/client.keystore.jks
> > >
> > > ssl.keystore.password=client-keystore-password
> > >

Re: Kafka Authorization and ACLs Broken

2017-05-24 Thread Raghav
Hi Rajini

Thank you very much. It perfectly works.

I think in my setup I was trying to use a CA (certificate authority) to
sign the certificates from the client and server, and then add it to the trust
store and keystore. I think in that process, I may have messed something up. I
will try the above config with a CA to sign certificates. Hopefully that would
work too.

Thanks a lot again.

Raghav




On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram 
wrote:

> Raghav/Darshan,
>
> Can you try these steps on a clean installation of Kafka? It works for me,
> so hopefully it will work for you. And then you can adapt to your scenario.
>
> *Create keystores and truststores:*
>
> keytool -genkey -alias kafka -keystore server.keystore.jks -dname
> "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
> -keypass server-key-password
>
> keytool -exportcert -file server-cert-file -keystore server.keystore.jks
> -alias kafka -storepass server-keystore-password
>
> keytool -importcert -file server-cert-file -keystore server.truststore.jks
> -alias kafka -storepass server-truststore-password -noprompt
>
> keytool -importcert -file server-cert-file -keystore client.truststore.jks
> -alias kafkaclient -storepass client-truststore-password -noprompt
>
>
> keytool -genkey -alias kafkaclient -keystore client.keystore.jks -dname
> "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
> -keypass client-key-password
>
> keytool -exportcert -file client-cert-file -keystore client.keystore.jks
> -alias kafkaclient -storepass client-keystore-password
>
> keytool -importcert -file client-cert-file -keystore server.truststore.jks
> -alias kafkaclient -storepass server-truststore-password -noprompt
>
> *Configure broker: Add these lines at the end of your server.properties*
>
> listeners=SSL://:9093
>
> advertised.listeners=SSL://127.0.0.1:9093
>
> ssl.keystore.location=/tmp/acl/server.keystore.jks
>
> ssl.keystore.password=server-keystore-password
>
> ssl.key.password=server-key-password
>
> ssl.truststore.location=/tmp/acl/server.truststore.jks
>
> ssl.truststore.password=server-truststore-password
>
> security.inter.broker.protocol=SSL
>
> security.protocol=SSL
>
> ssl.client.auth=required
>
> allow.everyone.if.no.acl.found=false
>
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>
> super.users=User:CN=KafkaBroker,O=Pivotal,C=UK
>
> *Configure producer: producer.properties*
>
> security.protocol=SSL
>
> ssl.truststore.location=/tmp/acl/client.truststore.jks
>
> ssl.truststore.password=client-truststore-password
>
> ssl.keystore.location=/tmp/acl/client.keystore.jks
>
> ssl.keystore.password=client-keystore-password
>
> ssl.key.password=client-key-password
>
>
> *Configure consumer: consumer.properties*
>
> security.protocol=SSL
>
> ssl.truststore.location=/tmp/acl/client.truststore.jks
>
> ssl.truststore.password=client-truststore-password
>
> ssl.keystore.location=/tmp/acl/client.keystore.jks
>
> ssl.keystore.password=client-keystore-password
>
> ssl.key.password=client-key-password
>
> group.id=testgroup
>
> *Create topic:*
>
> bin/kafka-topics.sh  --zookeeper localhost --create --topic testtopic
> --replication-factor 1 --partitions 1
>
>
> *Configure ACLs:*
>
> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181
> --add --allow-principal "User:CN=KafkaClient,O=Pivotal,C=UK" --producer
> --topic testtopic
>
> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181
> --add --allow-principal "User:CN=KafkaClient,O=Pivotal,C=UK" --consumer
> --topic testtopic --group testgroup
>
>
> *Run console producer and type in some messages:*
>
> bin/kafka-console-producer.sh  --producer.config
> /tmp/acl/producer.properties --topic testtopic --broker-list
> 127.0.0.1:9093
>
>
> *Run console consumer, you should see messages from above:*
>
> bin/kafka-console-consumer.sh  --consumer.config
> /tmp/acl/consumer.properties --topic testtopic --bootstrap-server
> 127.0.0.1:9093 --from-beginning
>
>
>
> On Tue, May 23, 2017 at 12:57 PM, Raghav  wrote:
>
>> Darshan,
>>
>> I have not yet successfully gotten the ACLs to work in Kafka. I am still
>> looking for help. I will update this email thread if I do find. In case
>> you
>> get it working, please let me know.
>>
>> Thanks.
>>
>> R
>>
>> On Tue, May 23, 2017 at 8:49 AM, Darshan Purandare <
>> purandare.dars...@gmail.com> wrote:
>>
>> > Raghav
>> >
>> > I saw few posts of yours around Kafka ACLs and the 

Re: Kafka Authorization and ACLs Broken

2017-05-24 Thread Raghav
Rajini

I will try and report to you shortly. Many thanks.

Raghav

On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram 
wrote:

> Raghav/Darshan,
>
> Can you try these steps on a clean installation of Kafka? It works for me,
> so hopefully it will work for you. And then you can adapt to your scenario.
>
> *Create keystores and truststores:*
>
> keytool -genkey -alias kafka -keystore server.keystore.jks -dname
> "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
> -keypass server-key-password
>
> keytool -exportcert -file server-cert-file -keystore server.keystore.jks
> -alias kafka -storepass server-keystore-password
>
> keytool -importcert -file server-cert-file -keystore server.truststore.jks
> -alias kafka -storepass server-truststore-password -noprompt
>
> keytool -importcert -file server-cert-file -keystore client.truststore.jks
> -alias kafkaclient -storepass client-truststore-password -noprompt
>
>
> keytool -genkey -alias kafkaclient -keystore client.keystore.jks -dname
> "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
> -keypass client-key-password
>
> keytool -exportcert -file client-cert-file -keystore client.keystore.jks
> -alias kafkaclient -storepass client-keystore-password
>
> keytool -importcert -file client-cert-file -keystore server.truststore.jks
> -alias kafkaclient -storepass server-truststore-password -noprompt
>
> *Configure broker: Add these lines at the end of your server.properties*
>
> listeners=SSL://:9093
>
> advertised.listeners=SSL://127.0.0.1:9093
>
> ssl.keystore.location=/tmp/acl/server.keystore.jks
>
> ssl.keystore.password=server-keystore-password
>
> ssl.key.password=server-key-password
>
> ssl.truststore.location=/tmp/acl/server.truststore.jks
>
> ssl.truststore.password=server-truststore-password
>
> security.inter.broker.protocol=SSL
>
> security.protocol=SSL
>
> ssl.client.auth=required
>
> allow.everyone.if.no.acl.found=false
>
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>
> super.users=User:CN=KafkaBroker,O=Pivotal,C=UK
>
> *Configure producer: producer.properties*
>
> security.protocol=SSL
>
> ssl.truststore.location=/tmp/acl/client.truststore.jks
>
> ssl.truststore.password=client-truststore-password
>
> ssl.keystore.location=/tmp/acl/client.keystore.jks
>
> ssl.keystore.password=client-keystore-password
>
> ssl.key.password=client-key-password
>
>
> *Configure consumer: consumer.properties*
>
> security.protocol=SSL
>
> ssl.truststore.location=/tmp/acl/client.truststore.jks
>
> ssl.truststore.password=client-truststore-password
>
> ssl.keystore.location=/tmp/acl/client.keystore.jks
>
> ssl.keystore.password=client-keystore-password
>
> ssl.key.password=client-key-password
>
> group.id=testgroup
>
> *Create topic:*
>
> bin/kafka-topics.sh  --zookeeper localhost --create --topic testtopic
> --replication-factor 1 --partitions 1
>
>
> *Configure ACLs:*
>
> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181
> --add --allow-principal "User:CN=KafkaClient,O=Pivotal,C=UK" --producer
> --topic testtopic
>
> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181
> --add --allow-principal "User:CN=KafkaClient,O=Pivotal,C=UK" --consumer
> --topic testtopic --group testgroup
>
>
> *Run console producer and type in some messages:*
>
> bin/kafka-console-producer.sh  --producer.config
> /tmp/acl/producer.properties --topic testtopic --broker-list
> 127.0.0.1:9093
>
>
> *Run console consumer, you should see messages from above:*
>
> bin/kafka-console-consumer.sh  --consumer.config
> /tmp/acl/consumer.properties --topic testtopic --bootstrap-server
> 127.0.0.1:9093 --from-beginning
>
>
>
> On Tue, May 23, 2017 at 12:57 PM, Raghav  wrote:
>
>> Darshan,
>>
>> I have not yet successfully gotten the ACLs to work in Kafka. I am still
>> looking for help. I will update this email thread if I do find. In case
>> you
>> get it working, please let me know.
>>
>> Thanks.
>>
>> R
>>
>> On Tue, May 23, 2017 at 8:49 AM, Darshan Purandare <
>> purandare.dars...@gmail.com> wrote:
>>
>> > Raghav
>> >
>> > I saw few posts of yours around Kafka ACLs and the problems. I have seen
>> > similar issues where Writer has not been able to write to any topic. I
>> have
>> > seen "leader not available" and sometimes "unknown topic or partition",
>> and
>> > "topic_authorization_failed" error.

Re: Kafka Authorization and ACLs Broken

2017-05-23 Thread Raghav
Darshan,

I have not yet successfully gotten the ACLs to work in Kafka. I am still
looking for help. I will update this email thread if I find a solution. In case you
get it working, please let me know.

Thanks.

R

On Tue, May 23, 2017 at 8:49 AM, Darshan Purandare <
purandare.dars...@gmail.com> wrote:

> Raghav
>
> I saw few posts of yours around Kafka ACLs and the problems. I have seen
> similar issues where Writer has not been able to write to any topic. I have
> seen "leader not available" and sometimes "unknown topic or partition", and
> "topic_authorization_failed" error.
>
> Let me know if you find a valid config that works.
>
> Thanks.
>
>
>
> On Tue, May 23, 2017 at 8:44 AM, Raghav  wrote:
>
>> Hello Kafka Users
>>
>> I am a new Kafka user and trying to make Kafka SSL work with Authorization
>> and ACLs. I followed posts from Kafka and Confluent docs exactly to the
>> point but my producer cannot write to kafka broker. I get
>> "LEADER_NOT_FOUND" errors. And even Consumer throws the same errors.
>>
>> Can someone please share their config which worked with ACLs.
>>
>> Here is my config. Please help.
>>
>> server.properties config
>> 
>> 
>> broker.id=0
>> auto.create.topics.enable=true
>> delete.topic.enable=true
>>
>> listeners=PLAINTEXT://kafka1.example.com:9092,SSL://kafka1.example.com:9093
>> host.name=kafka1.example.com
>>
>>
>>
>> ssl.keystore.location=/var/private/kafka1.keystore.jks
>> ssl.keystore.password=12345678
>> ssl.key.password=12345678
>>
>> ssl.truststore.location=/var/private/kafka1.truststore.jks
>> ssl.truststore.password=12345678
>>
>> ssl.client.auth=required
>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>> ssl.keystore.type=JKS
>> ssl.truststore.type=JKS
>>
>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>> 
>> 
>>
>>
>>
>> Here is producer Config(producer.properties)
>> 
>> 
>> security.protocol=SSL
>> ssl.truststore.location=/var/private/kafka2.truststore.jks
>> ssl.truststore.password=12345678
>>
>> ssl.keystore.location=/var/private/kafka2.keystore.jks
>> ssl.keystore.password=12345678
>> ssl.key.password=12345678
>>
>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>> ssl.truststore.type=JKS
>> ssl.keystore.type=JKS
>>
>> 
>> 
>>
>>
>> Raqhav
>>
>
>


-- 
Raghav


Kafka Authorization and ACLs Broken

2017-05-23 Thread Raghav
Hello Kafka Users

I am a new Kafka user and trying to make Kafka SSL work with Authorization
and ACLs. I followed posts from Kafka and Confluent docs exactly to the
point but my producer cannot write to kafka broker. I get
"LEADER_NOT_FOUND" errors. And even Consumer throws the same errors.

Can someone please share their config which worked with ACLs.

Here is my config. Please help.

server.properties config


broker.id=0
auto.create.topics.enable=true
delete.topic.enable=true

listeners=PLAINTEXT://kafka1.example.com:9092,SSL://kafka1.example.com:9093

host.name=kafka1.example.com


ssl.keystore.location=/var/private/kafka1.keystore.jks
ssl.keystore.password=12345678
ssl.key.password=12345678

ssl.truststore.location=/var/private/kafka1.truststore.jks
ssl.truststore.password=12345678

ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer





Here is producer Config(producer.properties)


security.protocol=SSL
ssl.truststore.location=/var/private/kafka2.truststore.jks
ssl.truststore.password=12345678

ssl.keystore.location=/var/private/kafka2.keystore.jks
ssl.keystore.password=12345678
ssl.key.password=12345678

ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.truststore.type=JKS
ssl.keystore.type=JKS
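A hedged guess at a common cause of these symptoms, not a confirmed diagnosis for
this setup: once authorizer.class.name is set and neither super.users nor
allow.everyone.if.no.acl.found=true is present, the brokers themselves can be
denied cluster operations; with inter-broker traffic still on the PLAINTEXT
listener the broker principal is User:ANONYMOUS, and failed replication then shows
up to clients as leader/metadata errors. A sketch of server.properties additions
that would rule this out:

super.users=User:ANONYMOUS
# or, if inter-broker traffic is moved to SSL:
# security.inter.broker.protocol=SSL
# super.users=User:<full DN of the broker certificate>
# or, for testing only:
# allow.everyone.if.no.acl.found=true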





Raqhav


Re: ACL with SSL is not working

2017-05-22 Thread Raghav
Hi Rajini

Thanks for the input. I think I may have made a mistake in granting Create access
to Kafka-cluser. I did as follows, please correct me if this is not right:

[root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
zookeeper.connect=kafka1.example.com:2181 --add --allow-principal
User:CN=kafka1 --operation Create --cluster kafka-cluster

Adding ACLs for resource `Cluster:kafka-cluster`:
User:CN=kafka1 has Allow permission for operations: Create from
hosts: *

Current ACLs for resource `Cluster:kafka-cluster`:
User:CN=kafka1 has Allow permission for operations: Create from
hosts: *

[root@kafka1 KAFKA]#

Thanks.


On Mon, May 22, 2017 at 8:02 AM, Rajini Sivaram 
wrote:

> If you are using auto-create of topics, you also need to grant Create
> access to kaka-cluster.
>
> On Mon, May 22, 2017 at 9:51 AM, Raghav  wrote:
>
> > Hi Rajini
> >
> > I tried again with IP addresses this time, and I get the following error
> > log for the given ACLS. Is there something wrong in the way I am giving
> > user name ?
> >
> > *List of ACL*
> >
> > [root@kafka-dev1 KAFKA]# bin/kafka-acls --authorizer-properties
> > zookeeper.connect=localhost:2181 --add --allow-principal User:CN=kafka2
> > --allow-host 10.10.0.23 --operation Read --operation Write --topic
> > kafka-testtopic
> > Adding ACLs for resource `Topic:kafka-testtopic`:
> > User:CN=kafka2 has Allow permission for operations: Read from
> > hosts: 10.10.0.23
> > User:CN=kafka2 has Allow permission for operations: Write from
> > hosts: 10.10.0.23
> > [root@kafka-dev1 KAFKA]#
> >
> > *Authorizer LOGS*
> >
> > [2017-05-22 06:45:44,520] DEBUG No acl found for resource
> > Cluster:kafka-cluster, authorized = false (kafka.authorizer.logger)
> > [2017-05-22 06:45:44,520] DEBUG Principal = User:CN=kafka2 is Denied
> > Operation = Create from host = 10.10.0.23 on resource =
> > Cluster:kafka-cluster (kafka.authorizer.logger)
> >
> > On Mon, May 22, 2017 at 6:34 AM, Rajini Sivaram  >
> > wrote:
> >
> > > Raghav,
> > >
> > > I don't believe we do reverse DNS lookup for matching ACL hosts. Have
> you
> > > tried defining ACLs with host IP address?
> > >
> > > On Mon, May 22, 2017 at 9:19 AM, Raghav  wrote:
> > >
> > > > Hi
> > > >
> > > > I enabled the DEBUG logs on Kafka authorizer, and I see the following
> > > logs
> > > > for the given ACLs. Am I missing something in my config here ? Any
> help
> > > is
> > > > greatly appreciated. Thanks.
> > > >
> > > >
> > > > *List of ACL*
> > > >
> > > > [root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
> > > > zookeeper.connect=localhost:2181 --list --topic kafka-testtopic
> > > > Current ACLs for resource `Topic:kafka-testtopic`:
> > > > User:* has Allow permission for operations: Read from hosts:
> > bin
> > > > User:CN=kafka2 has Allow permission for operations: Write
> from
> > > > hosts: kafka2.example.com
> > > > User:CN=kafka2 has Allow permission for operations: Read from
> > > > hosts: kafka2.example.com
> > > > [root@kafka1 KAFKA]#
> > > >
> > > >
> > > > *Authorizer LOGS*
> > > >
> > > > [2017-05-22 06:10:16,635] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > > [2017-05-22 06:10:16,736] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > > [2017-05-22 06:10:16,839] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > > [2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > >
> > > >
> > > > Thanks.
> > > >
> > > >
> > > > On Sun, May 21, 2017 at 10:52 PM, Raghav 
> > wrote:
> > > >
> > > > > I tried all possible ways (including the way you suggested
> Michael),
> > > but
> > > > I
> > > > > still get the same 

Re: ACL with SSL is not working

2017-05-22 Thread Raghav
Rajini

I tried to add permission for Kafka broker to write. Now I get this error.
Am I missing anything else ?

[2017-05-22 11:11:15,065] WARN Error while fetching metadata with
correlation id 1 : {kafka-testtopic=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)
[2017-05-22 11:11:15,167] WARN Error while fetching metadata with
correlation id 2 : {kafka-testtopic=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)
[2017-05-22 11:11:15,271] WARN Error while fetching metadata with
correlation id 3 : {kafka-testtopic=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)
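A hedged aside for anyone hitting the same TOPIC_AUTHORIZATION_FAILED later:
besides Create on the cluster, the producing principal usually also needs Write
and Describe on the topic. A minimal sketch, assuming the certificate principal
really is User:CN=kafka2 and the topic is kafka-testtopic:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:CN=kafka2 --producer --topic kafka-testtopic

The --producer convenience option adds the Write and Describe topic ACLs in one
step.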

On Mon, May 22, 2017 at 8:02 AM, Rajini Sivaram 
wrote:

> If you are using auto-create of topics, you also need to grant Create
> access to kafka-cluster.
>
> On Mon, May 22, 2017 at 9:51 AM, Raghav  wrote:
>
> > Hi Rajini
> >
> > I tried again with IP addresses this time, and I get the following error
> > log for the given ACLS. Is there something wrong in the way I am giving
> > user name ?
> >
> > *List of ACL*
> >
> > [root@kafka-dev1 KAFKA]# bin/kafka-acls --authorizer-properties
> > zookeeper.connect=localhost:2181 --add --allow-principal User:CN=kafka2
> > --allow-host 10.10.0.23 --operation Read --operation Write --topic
> > kafka-testtopic
> > Adding ACLs for resource `Topic:kafka-testtopic`:
> > User:CN=kafka2 has Allow permission for operations: Read from
> > hosts: 10.10.0.23
> > User:CN=kafka2 has Allow permission for operations: Write from
> > hosts: 10.10.0.23
> > [root@kafka-dev1 KAFKA]#
> >
> > *Authorizer LOGS*
> >
> > [2017-05-22 06:45:44,520] DEBUG No acl found for resource
> > Cluster:kafka-cluster, authorized = false (kafka.authorizer.logger)
> > [2017-05-22 06:45:44,520] DEBUG Principal = User:CN=kafka2 is Denied
> > Operation = Create from host = 10.10.0.23 on resource =
> > Cluster:kafka-cluster (kafka.authorizer.logger)
> >
> > On Mon, May 22, 2017 at 6:34 AM, Rajini Sivaram  >
> > wrote:
> >
> > > Raghav,
> > >
> > > I don't believe we do reverse DNS lookup for matching ACL hosts. Have
> you
> > > tried defining ACLs with host IP address?
> > >
> > > On Mon, May 22, 2017 at 9:19 AM, Raghav  wrote:
> > >
> > > > Hi
> > > >
> > > > I enabled the DEBUG logs on Kafka authorizer, and I see the following
> > > logs
> > > > for the given ACLs. Am I missing something in my config here ? Any
> help
> > > is
> > > > greatly appreciated. Thanks.
> > > >
> > > >
> > > > *List of ACL*
> > > >
> > > > [root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
> > > > zookeeper.connect=localhost:2181 --list --topic kafka-testtopic
> > > > Current ACLs for resource `Topic:kafka-testtopic`:
> > > > User:* has Allow permission for operations: Read from hosts:
> > bin
> > > > User:CN=kafka2 has Allow permission for operations: Write
> from
> > > > hosts: kafka2.example.com
> > > > User:CN=kafka2 has Allow permission for operations: Read from
> > > > hosts: kafka2.example.com
> > > > [root@kafka1 KAFKA]#
> > > >
> > > >
> > > > *Authorizer LOGS*
> > > >
> > > > [2017-05-22 06:10:16,635] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > > [2017-05-22 06:10:16,736] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > > [2017-05-22 06:10:16,839] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > > [2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
> > > > Operation = Describe from host = 10.10.0.23 on resource =
> > > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > >
> > > >
> > > > Thanks.
> > > >
> > > >
> > > > On Sun, May 21, 2017 at 10:52 PM, Raghav 
> > wrote:
> > > >
> > > > > I tried all possible ways (including the way you suggested
> Michael),
> > > but
> > > > I
> > > > > still get the same 

Re: ACL with SSL is not working

2017-05-22 Thread Raghav
Hi Rajini

I tried again with IP addresses this time, and I get the following error
log for the given ACLs. Is there something wrong in the way I am giving the
user name ?

*List of ACL*

[root@kafka-dev1 KAFKA]# bin/kafka-acls --authorizer-properties
zookeeper.connect=localhost:2181 --add --allow-principal User:CN=kafka2
--allow-host 10.10.0.23 --operation Read --operation Write --topic
kafka-testtopic
Adding ACLs for resource `Topic:kafka-testtopic`:
User:CN=kafka2 has Allow permission for operations: Read from
hosts: 10.10.0.23
User:CN=kafka2 has Allow permission for operations: Write from
hosts: 10.10.0.23
[root@kafka-dev1 KAFKA]#

*Authorizer LOGS*

[2017-05-22 06:45:44,520] DEBUG No acl found for resource
Cluster:kafka-cluster, authorized = false (kafka.authorizer.logger)
[2017-05-22 06:45:44,520] DEBUG Principal = User:CN=kafka2 is Denied
Operation = Create from host = 10.10.0.23 on resource =
Cluster:kafka-cluster (kafka.authorizer.logger)
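A hedged sketch of the grant the log above is asking for, i.e. Create on the
cluster resource (needed when topics are auto-created); the principal is assumed
to match the certificate DN exactly as the authorizer prints it:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:CN=kafka2 --operation Create --cluster

Creating the topic up front with kafka-topics.sh is an alternative that avoids the
cluster-level ACL.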

On Mon, May 22, 2017 at 6:34 AM, Rajini Sivaram 
wrote:

> Raghav,
>
> I don't believe we do reverse DNS lookup for matching ACL hosts. Have you
> tried defining ACLs with host IP address?
>
> On Mon, May 22, 2017 at 9:19 AM, Raghav  wrote:
>
> > Hi
> >
> > I enabled the DEBUG logs on Kafka authorizer, and I see the following
> logs
> > for the given ACLs. Am I missing something in my config here ? Any help
> is
> > greatly appreciated. Thanks.
> >
> >
> > *List of ACL*
> >
> > [root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
> > zookeeper.connect=localhost:2181 --list --topic kafka-testtopic
> > Current ACLs for resource `Topic:kafka-testtopic`:
> > User:* has Allow permission for operations: Read from hosts: bin
> > User:CN=kafka2 has Allow permission for operations: Write from
> > hosts: kafka2.example.com
> > User:CN=kafka2 has Allow permission for operations: Read from
> > hosts: kafka2.example.com
> > [root@kafka1 KAFKA]#
> >
> >
> > *Authorizer LOGS*
> >
> > [2017-05-22 06:10:16,635] DEBUG Principal = User:CN=kafka2 is Denied
> > Operation = Describe from host = 10.10.0.23 on resource =
> > Topic:kafka-testtopic (kafka.authorizer.logger)
> > [2017-05-22 06:10:16,736] DEBUG Principal = User:CN=kafka2 is Denied
> > Operation = Describe from host = 10.10.0.23 on resource =
> > Topic:kafka-testtopic (kafka.authorizer.logger)
> > [2017-05-22 06:10:16,839] DEBUG Principal = User:CN=kafka2 is Denied
> > Operation = Describe from host = 10.10.0.23 on resource =
> > Topic:kafka-testtopic (kafka.authorizer.logger)
> > [2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
> > Operation = Describe from host = 10.10.0.23 on resource =
> > Topic:kafka-testtopic (kafka.authorizer.logger)
> >
> >
> > Thanks.
> >
> >
> > On Sun, May 21, 2017 at 10:52 PM, Raghav  wrote:
> >
> > > I tried all possible ways (including the way you suggested Michael),
> but
> > I
> > > still get the same error.
> > >
> > > Is there a step by step guide to get ACLs working in Kafka with SSL ?
> > >
> > > Thanks.
> > >
> > > On Fri, May 19, 2017 at 11:40 AM, Michael Rauter <
> mrau...@anexia-it.com>
> > > wrote:
> > >
> > >> Hi,
> > >>
> > >> with SSL client authentication the user identifier is the dname of the
> > >> certificate
> > >>
> > >> in your case “CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US”
> > >>
> > >> for example when you want to set an ACL rule (read and write for topic
> > >> TOPICNAME from every host):
> > >>
> > >> $ kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181
> > >> --add --allow-principal User:CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US
> > >> --allow-host "*" --operation Read --operation Write --topic TOPICNAME
> > >>
> > >>
> > >> Am 19.05.17, 20:02 schrieb "Raghav" :
> > >>
> > >> If it helps, this is how I generated the keystone for my client
> > >>
> > >> $ keytool -alias kafka-dev2 -validity 365 -keystore
> > >> kafka-dev2.keystore.jks
> > >> -dname "CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US" -genkey -ext SAN=DNS:
> > >> kafka-dev2.example.com -storepass 123456
> > >>
> > >> Anything wrong here ?
> > >>
> > >> On Fri, May 19, 2017 at 10:32 AM, Raghav 
> > >> wrote:
> > >>
> > >> > Hi
> > >> >
> 

Re: ACL with SSL is not working

2017-05-22 Thread Raghav
Hi

I enabled the DEBUG logs on Kafka authorizer, and I see the following logs
for the given ACLs. Am I missing something in my config here ? Any help is
greatly appreciated. Thanks.


*List of ACL*

[root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
zookeeper.connect=localhost:2181 --list --topic kafka-testtopic
Current ACLs for resource `Topic:kafka-testtopic`:
User:* has Allow permission for operations: Read from hosts: bin
User:CN=kafka2 has Allow permission for operations: Write from
hosts: kafka2.example.com
User:CN=kafka2 has Allow permission for operations: Read from
hosts: kafka2.example.com
[root@kafka1 KAFKA]#


*Authorizer LOGS*

[2017-05-22 06:10:16,635] DEBUG Principal = User:CN=kafka2 is Denied
Operation = Describe from host = 10.10.0.23 on resource =
Topic:kafka-testtopic (kafka.authorizer.logger)
[2017-05-22 06:10:16,736] DEBUG Principal = User:CN=kafka2 is Denied
Operation = Describe from host = 10.10.0.23 on resource =
Topic:kafka-testtopic (kafka.authorizer.logger)
[2017-05-22 06:10:16,839] DEBUG Principal = User:CN=kafka2 is Denied
Operation = Describe from host = 10.10.0.23 on resource =
Topic:kafka-testtopic (kafka.authorizer.logger)
[2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
Operation = Describe from host = 10.10.0.23 on resource =
Topic:kafka-testtopic (kafka.authorizer.logger)


Thanks.
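A hedged sketch combining the two points raised later in this thread: the
authorizer matches the client's IP address rather than a resolved hostname, and
metadata requests need the Describe operation. The IP and principal below are
taken from the log above and are otherwise assumptions:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:CN=kafka2 --allow-host 10.10.0.23 \
  --operation Read --operation Write --operation Describe --topic kafka-testtopic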


On Sun, May 21, 2017 at 10:52 PM, Raghav  wrote:

> I tried all possible ways (including the way you suggested Michael), but I
> still get the same error.
>
> Is there a step by step guide to get ACLs working in Kafka with SSL ?
>
> Thanks.
>
> On Fri, May 19, 2017 at 11:40 AM, Michael Rauter 
> wrote:
>
>> Hi,
>>
>> with SSL client authentication the user identifier is the dname of the
>> certificate
>>
>> in your case “CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US”
>>
>> for example when you want to set an ACL rule (read and write for topic
>> TOPICNAME from every host):
>>
>> $ kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181
>> --add --allow-principal User:CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US
>> --allow-host "*" --operation Read --operation Write --topic TOPICNAME
>>
>>
>> Am 19.05.17, 20:02 schrieb "Raghav" :
>>
>> If it helps, this is how I generated the keystone for my client
>>
>> $ keytool -alias kafka-dev2 -validity 365 -keystore
>> kafka-dev2.keystore.jks
>> -dname "CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US" -genkey -ext SAN=DNS:
>> kafka-dev2.example.com -storepass 123456
>>
>> Anything wrong here ?
>>
>> On Fri, May 19, 2017 at 10:32 AM, Raghav 
>> wrote:
>>
>> > Hi
>> >
>> > I have a SSL setup with Kafka Broker, Producer and Consumer, and it
>> works
>> > fine. I tried to setup ACLs as given on the website. When I start my
>> > producer, I am getting this error:
>> >
>> >
>> > [root@kafka-dev2 KAFKA]# bin/kafka-console-producer --broker-list
>> > kafka-dev1.example.com:9093 --topic test --producer.config
>> > ./etc/kafka/producer.properties
>> >
>> > HelloWorld
>> >
>> > [2017-05-19 10:24:42,437] WARN Error while fetching metadata with
>> > correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION}
>> > (org.apache.kafka.clients.NetworkClient)
>> > [root@kafka-dev2 KAFKA]#
>> >
>> >
>> > server config has the following entries
>> > --------
>> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>> > super.users=User:Bob
>> > 
>> >
>> > When certificate was being generated for Producer (Bob was used in
>> the
>> > CNAME.)
>> >
>> >
>> > Am I missing something here ? Please help
>> >
>> > Thanks.
>> >
>> > Raghav
>> >
>>
>>
>>
>> --
>> Raghav
>>
>>
>>
>
>
> --
> Raghav
>



-- 
Raghav


Re: ACL with SSL is not working

2017-05-21 Thread Raghav
I tried all possible ways (including the way you suggested Michael), but I
still get the same error.

Is there a step by step guide to get ACLs working in Kafka with SSL ?

Thanks.

On Fri, May 19, 2017 at 11:40 AM, Michael Rauter 
wrote:

> Hi,
>
> with SSL client authentication the user identifier is the dname of the
> certificate
>
> in your case “CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US”
>
> for example when you want to set an ACL rule (read and write for topic
> TOPICNAME from every host):
>
> $ kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181
> --add --allow-principal User:CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US
> --allow-host "*" --operation Read --operation Write --topic TOPICNAME
>
>
> Am 19.05.17, 20:02 schrieb "Raghav" :
>
> If it helps, this is how I generated the keystone for my client
>
> $ keytool -alias kafka-dev2 -validity 365 -keystore
> kafka-dev2.keystore.jks
> -dname "CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US" -genkey -ext SAN=DNS:
> kafka-dev2.example.com -storepass 123456
>
> Anything wrong here ?
>
> On Fri, May 19, 2017 at 10:32 AM, Raghav 
> wrote:
>
> > Hi
> >
> > I have a SSL setup with Kafka Broker, Producer and Consumer, and it
> works
> > fine. I tried to setup ACLs as given on the website. When I start my
> > producer, I am getting this error:
> >
> >
> > [root@kafka-dev2 KAFKA]# bin/kafka-console-producer --broker-list
> > kafka-dev1.example.com:9093 --topic test --producer.config
> > ./etc/kafka/producer.properties
> >
> > HelloWorld
> >
> > [2017-05-19 10:24:42,437] WARN Error while fetching metadata with
> > correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION}
> > (org.apache.kafka.clients.NetworkClient)
> > [root@kafka-dev2 KAFKA]#
> >
> >
> > server config has the following entries
> > 
> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> > super.users=User:Bob
> > ----
> >
> > When certificate was being generated for Producer (Bob was used in
> the
> > CNAME.)
> >
> >
> > Am I missing something here ? Please help
> >
> > Thanks.
> >
> > Raghav
> >
>
>
>
> --
> Raghav
>
>
>


-- 
Raghav


Re: ACL with SSL is not working

2017-05-19 Thread Raghav
If it helps, this is how I generated the keystore for my client

$ keytool -alias kafka-dev2 -validity 365 -keystore kafka-dev2.keystore.jks
-dname "CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US" -genkey -ext SAN=DNS:
kafka-dev2.example.com -storepass 123456

Anything wrong here ?
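A hedged way to double-check what principal the broker will see for this keystore:
keytool can print the certificate's full distinguished name, which is what SSL
client authentication uses as the user name, and the authorizer DEBUG log then
shows the exact string to put into ACLs or super.users:

keytool -list -v -keystore kafka-dev2.keystore.jks -storepass 123456 | grep "Owner:"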

On Fri, May 19, 2017 at 10:32 AM, Raghav  wrote:

> Hi
>
> I have a SSL setup with Kafka Broker, Producer and Consumer, and it works
> fine. I tried to setup ACLs as given on the website. When I start my
> producer, I am getting this error:
>
>
> [root@kafka-dev2 KAFKA]# bin/kafka-console-producer --broker-list
> kafka-dev1.example.com:9093 --topic test --producer.config
> ./etc/kafka/producer.properties
>
> HelloWorld
>
> [2017-05-19 10:24:42,437] WARN Error while fetching metadata with
> correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION}
> (org.apache.kafka.clients.NetworkClient)
> [root@kafka-dev2 KAFKA]#
>
>
> server config has the following entries
> 
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> super.users=User:Bob
> 
>
> When certificate was being generated for Producer (Bob was used in the
> CNAME.)
>
>
> Am I missing something here ? Please help
>
> Thanks.
>
> Raghav
>



-- 
Raghav


ACL with SSL is not working

2017-05-19 Thread Raghav
Hi

I have a SSL setup with Kafka Broker, Producer and Consumer, and it works
fine. I tried to setup ACLs as given on the website. When I start my
producer, I am getting this error:


[root@kafka-dev2 KAFKA]# bin/kafka-console-producer --broker-list
kafka-dev1.example.com:9093 --topic test --producer.config
./etc/kafka/producer.properties

HelloWorld

[2017-05-19 10:24:42,437] WARN Error while fetching metadata with
correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)
[root@kafka-dev2 KAFKA]#


server config has the following entries

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:Bob


When the certificate was being generated for the Producer (Bob was used as the
CN).
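A hedged note rather than a confirmed fix: with SSL client authentication the
default principal is the certificate's full distinguished name, not just the CN,
so a super-user entry generally needs the complete string from the keystore's
dname, for example something like

super.users=User:CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US

with the safest source being the principal exactly as the authorizer DEBUG log
prints it when access is denied.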


Am I missing something here ? Please help

Thanks.

Raghav


Re: Kafka ACL's with SSL Protocol is not working

2017-05-19 Thread Raghav
> > >> > > *# ACL SETTINGS
> > >> > #*
> > >> > >
> > >> > > *auto.create.topics.enable=true*
> > >> > >
> > >> > > *authorizer.class.name
> > >> > > <http://authorizer.class.name>=kafka.security.auth.SimpleAcl
> > >> Authorizer*
> > >> > >
> > >> > > *security.inter.broker.protocol=SSL*
> > >> > >
> > >> > > *#allow.everyone.if.no.acl.found=true*
> > >> > >
> > >> > > *#principal.builder.class=CustomizedPrincipalBuilderClass*
> > >> > >
> > >> > > *#super.users=User:"CN=writeuser,OU=Unknown,O=
> > >> > > Unknown,L=Unknown,ST=Unknown,C=Unknown"*
> > >> > >
> > >> > > *#super.users=User:Raghu;User:Admin*
> > >> > >
> > >> > > *#offsets.storage=kafka*
> > >> > >
> > >> > > *#dual.commit.enabled=true*
> > >> > >
> > >> > > *listeners=SSL://10.247.195.122:9093 <http://10.247.195.122:9093
> >*
> > >> > >
> > >> > > *#listeners=PLAINTEXT://10.247.195.122:9092 <
> > >> http://10.247.195.122:9092
> > >> > >*
> > >> > >
> > >> > > *#listeners=PLAINTEXT://10.247.195.122:9092
> > >> > > <http://10.247.195.122:9092>,SSL://10.247.195.122:9093
> > >> > > <http://10.247.195.122:9093>*
> > >> > >
> > >> > > *#advertised.listeners=PLAINTEXT://10.247.195.122:9092
> > >> > > <http://10.247.195.122:9092>*
> > >> > >
> > >> > >
> > >> > > *
> > >> > > ssl.keystore.location=/home/raghu/kafka/security/server.
> > keystore.jks*
> > >> > >
> > >> > > *ssl.keystore.password=123456*
> > >> > >
> > >> > > *ssl.key.password=123456*
> > >> > >
> > >> > > *
> > >> > > ssl.truststore.location=/home/raghu/kafka/security/server.
> > >> > truststore.jks*
> > >> > >
> > >> > > *ssl.truststore.password=123456*
> > >> > >
> > >> > >
> > >> > >
> > >> > > *Set the ACL from Authorizer CLI:*
> > >> > >
> > >> > > > *bin/kafka-acls.sh --authorizer-properties
> > >> > > zookeeper.connect=10.247.195.122:2181 <http://10.247.195.122:2181
> >
> > >> > --list
> > >> > > --topic ssltopic*
> > >> > >
> > >> > > *Current ACLs for resource `Topic:ssltopic`: *
> > >> > >
> > >> > > *  User:CN=writeuser, OU=Unknown, O=Unknown, L=Unknown,
> ST=Unknown,
> > >> > > C=Unknown has Allow permission for operations: Write from hosts:
> * *
> > >> > >
> > >> > >
> > >> > > *XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$
> > >> bin/kafka-console-producer.sh
> > >> > > --broker-list 10.247.195.122:9093 <http://10.247.195.122:9093>
> > >> --topic
> > >> > > ssltopic --producer.config client-ssl.properties*
> > >> > >
> > >> > >
> > >> > > *[2016-12-13 14:53:45,839] WARN Error while fetching metadata with
> > >> > > correlation id 0 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> > >> > > (org.apache.kafka.clients.NetworkClient)*
> > >> > >
> > >> > > *[2016-12-13 14:53:45,984] WARN Error while fetching metadata with
> > >> > > correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> > >> > > (org.apache.kafka.clients.NetworkClient)*
> > >> > >
> > >> > >
> > >> > > *XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ cat
> client-ssl.properties*
> > >> > >
> > >> > > *#group.id <http://group.id>=sslgroup*
> > >> > >
> > >> > > *security.protocol=SSL*
> > >> > >
> > >> > > *ssl.truststore.location=/Users/rbaddam/Desktop/Dev/
> > >> > > kafka_2.11-0.10.1.0/ssl/client.truststore.jks*
> > >> > >
> > >> > > *ssl.truststore.password=123456*
> > >> > >
> > >> > > * #Configure Below if you use Client Auth*
> > >> > >
> > >> > >
> > >> > > *ssl.keystore.location=/Users/rbaddam/Desktop/Dev/kafka_2.
> > >> > > 11-0.10.1.0/ssl/client.keystore.jks*
> > >> > >
> > >> > > *ssl.keystore.password=123456*
> > >> > >
> > >> > > *ssl.key.password=123456*
> > >> > >
> > >> > >
> > >> > > *XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$
> > >> bin/kafka-console-consumer.sh
> > >> > > --bootstrap-server 10.247.195.122:9093 <
> http://10.247.195.122:9093>
> > >> > > --new-consumer --consumer.config client-ssl.properties --topic
> > >> ssltopic
> > >> > > --from-beginning*
> > >> > >
> > >> > > *[2016-12-13 14:53:28,817] WARN Error while fetching metadata with
> > >> > > correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> > >> > > (org.apache.kafka.clients.NetworkClient)*
> > >> > >
> > >> > > *[2016-12-13 14:53:28,819] ERROR Unknown error when running
> > consumer:
> > >> > > (kafka.tools.ConsoleConsumer$)*
> > >> > >
> > >> > > *org.apache.kafka.common.errors.GroupAuthorizationException: Not
> > >> > > authorized to access group: console-consumer-52826*
> > >> > >
> > >> > >
> > >> > > Thanks in advance,
> > >> > >
> > >> > > Raghu - raghu98...@gmail.com
> > >> > > This e-mail and its contents (to include attachments) are the
> > >> property of
> > >> > > National Health Systems, Inc., its subsidiaries and affiliates,
> > >> including
> > >> > > but not limited to Rx.com Community Healthcare Network, Inc. and
> its
> > >> > > subsidiaries, and may contain confidential and proprietary or
> > >> privileged
> > >> > > information. If you are not the intended recipient of this e-mail,
> > you
> > >> > are
> > >> > > hereby notified that any unauthorized disclosure, copying, or
> > >> > distribution
> > >> > > of this e-mail or of its attachments, or the taking of any
> > >> unauthorized
> > >> > > action based on information contained herein is strictly
> prohibited.
> > >> > > Unauthorized use of information contained herein may subject you
> to
> > >> civil
> > >> > > and criminal prosecution and penalties. If you are not the
> intended
> > >> > > recipient, please immediately notify the sender by telephone at
> > >> > > 800-433-5719 or return e-mail and permanently delete the original
> > >> > e-mail.
> > >> > >
> > >> >
> > >>
> > >
> > >
> > >
> > > --
> > > G.Kiran Kumar
> > >
> >
> >
> >
> > --
> > G.Kiran Kumar
> >
>



-- 
Raghav


Re: Securing Kafka - Keystore and Truststore question

2017-05-18 Thread Raghav
Rajini

I just tried this. It turns out that I can't import cert-file by itself in
trust store until it is signed by a CA. Could be because of the format ?
Any idea here ...

In the above steps, if I sign the server-cert-file and client-cert-file by
a private CA then I can add them to trust store and key store. In this
test, I did not add the CA cert to either the keystore or the trust store.

Thanks for all your help.
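For reference, a minimal sketch of the private-CA round trip; file names and
aliases are illustrative. The file produced by keytool -certreq is a signing
request rather than a certificate, which is likely why importing it directly
fails, while the CA certificate itself, and any CA-signed certificate, import fine:

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial
keytool -keystore kafka.server.keystore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -importcert -file cert-signed
keytool -keystore kafka.client.truststore.jks -alias CARoot -importcert -file ca-cert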




On Thu, May 18, 2017 at 8:26 AM, Rajini Sivaram 
wrote:

> Raghav,
>
> Perhaps what you want to do is:
>
> *You do (for the brokers):*
>
> Generate key-pair for broker:
>
> keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365
> -genkey
>
> Export certificate to a file to send to your customers:
>
> keytool -exportcert -file server-cert-file -keystore
> kafka.server.keystore.jks -alias localhost
>
>
> And you send server-cert-file to your customers.
>
> Once you get your customer's client-cert-file, you do:
>
> keytool -importcert -file client-cert-file -keystore
> kafka.server.truststore.jks -alias customerA
>
> If you are using SSL for inter-broker communication, your broker
> certificate also needs to be in the server truststore:
>
> keytool -importcert -file server-cert-file -keystore
> kafka.client.truststore.jks -alias broker
>
>
> *Your customers do (for the clients):*
>
> Generate key-pair for client:
>
> keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365
> -genkey
>
> Export certificate to a file to send to to you:
>
> keytool -exportcert -file client-cert-file -keystore
> kafka.client.keystore.jks -alias localhost
>
>
> Your customers send you their client-cert-file
>
> Your customers create their truststore using the broker certificate
> server-cert-file that you send to them:
>
> keytool -importcert -file server-cert-file -keystore
> kafka.client.truststore.jks -alias broker
>
>
>
> You then configure your brokers with (kafka.server.keystore.jks, ka
> fka.server.truststore.jks).Your customers configure their clients with (
> kafka.client.keystore.jks, kafka.client.truststore.jks).
>
>
> Hope that helps.
>
> Regards,
>
> Rajini
>
>
>
> On Thu, May 18, 2017 at 10:33 AM, Raghav  wrote:
>
>> Rajini,
>>
>> Sure, will submit a PR shortly.
>>
>> Your answer is very helpful, but I think I did not put the question
>> correctly. Pardon my ignore but I am still trying to get my ways around
>> Kafka security.
>>
>> I was trying to understand, can we (Kafka Broker) just add the
>> certificate (unsigned or signed) from customer to our trust store without
>> adding the CA cert to trust store... could that work ?
>>
>> 1. Let's say Kafka broker (there is only 1 for simplicity) generates a
>> keystore and generates a key using the command below
>>
>> keytool -keystore kafka.server.keystore.jks -alias localhost -validity *365* 
>> -genkey
>>
>> keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file 
>> server-cert-file
>>
>> 2. Similarly, Kafka Client (Producer) does the same
>>
>> keytool -keystore kafka.client.keystore.jks -alias localhost -validity *365* 
>> -genkey
>>
>> keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file 
>> client-cert-file
>>
>>
>> 3. Now, we add *client-cert-file* into the trust store of server, and
>> *server-cert-file* into the trust store of client. Given that each trust
>> store has other party's certificate in their trust store, does CA
>> certificate come into the picture ?
>>
>> On Thu, May 18, 2017 at 6:26 AM, Rajini Sivaram 
>> wrote:
>>
>>> Raghav,
>>>
>>> Yes, you can create a truststore with your customers' certificates and
>>> vice-versa. It will be best to give your CA certificate to your customers
>>> and get the CA certificate from each of your customers and add them to your
>>> broker's truststore. You can both then create additional certificates if
>>> you need without any changes to your truststore as long as the CA
>>> certificates are valid. Unlike certificates signed by a trusted authority,
>>> you will need to add the CAs of every customer to your truststore. Kafka
>>> brokers don't reload certificates, so if you wanted to add another
>>> customer's certificate to your truststore, you will need to restart your
>>> broker.
>>>
>>> Would you like to submit a PR with the information that is missing in
>>> the Apache Kafka documentation that you think may be useful?

Re: Securing Kafka - Keystore and Truststore question

2017-05-18 Thread Raghav
Rajini,

Sure, will submit a PR shortly.

Your answer is very helpful, but I think I did not put the question
correctly. Pardon my ignorance, but I am still trying to find my way around
Kafka security.

I was trying to understand, can we (Kafka Broker) just add the certificate
(unsigned or signed) from customer to our trust store without adding the CA
cert to trust store... could that work ?

1. Let's say Kafka broker (there is only 1 for simplicity) generates a
keystore and generates a key using the command below

keytool -keystore kafka.server.keystore.jks -alias localhost -validity
*365* -genkey

keytool -keystore kafka.server.keystore.jks -alias localhost -certreq
-file server-cert-file

2. Similarly, Kafka Client (Producer) does the same

keytool -keystore kafka.client.keystore.jks -alias localhost -validity
*365* -genkey

keytool -keystore kafka.client.keystore.jks -alias localhost -certreq
-file client-cert-file


3. Now, we add *client-cert-file* into the trust store of server, and
*server-cert-file* into the trust store of client. Given that each trust
store has other party's certificate in their trust store, does CA
certificate come into the picture ?
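A small hedged sketch of step 3 with exported certificates instead of signing
requests: keytool -certreq emits a CSR, so for a direct certificate exchange the
self-signed certificates would be exported with -exportcert and imported into the
other side's truststore, as in the steps quoted below; file names and aliases are
illustrative:

keytool -exportcert -file server-cert-file -keystore kafka.server.keystore.jks -alias localhost
keytool -exportcert -file client-cert-file -keystore kafka.client.keystore.jks -alias localhost
keytool -importcert -file client-cert-file -keystore kafka.server.truststore.jks -alias client -noprompt
keytool -importcert -file server-cert-file -keystore kafka.client.truststore.jks -alias broker -noprompt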

On Thu, May 18, 2017 at 6:26 AM, Rajini Sivaram 
wrote:

> Raghav,
>
> Yes, you can create a truststore with your customers' certificates and
> vice-versa. It will be best to give your CA certificate to your customers
> and get the CA certificate from each of your customers and add them to your
> broker's truststore. You can both then create additional certificates if
> you need without any changes to your truststore as long as the CA
> certificates are valid. Unlike certificates signed by a trusted authority,
> you will need to add the CAs of every customer to your truststore. Kafka
> brokers don't reload certificates, so if you wanted to add another
> customer's certificate to your truststore, you will need to restart your
> broker.
>
> Would you like to submit a PR with the information that is missing in the
> Apache Kafka documentation that you think may be useful?
>
> Regards,
>
> Rajini
>
> On Wed, May 17, 2017 at 6:21 PM, Raghav  wrote:
>
>> Another quick question:
>>
>> Say we chose to add our customer's certificates directly to our brokers
>> trust store and vice verse, could that work ? There is no documentation on
>> Kafka or Confluent site for this ?
>>
>> Thanks.
>>
>>
>> On Wed, May 17, 2017 at 1:56 PM, Rajini Sivaram 
>> wrote:
>>
>>> Raghav,
>>>
>>> 1. Yes, your customers can use certificates signed by a trusted
>>> authority. You can simply omit the truststore configuration for your broker
>>> in server.properties, and Kafka would use the default, which will trust the
>>> client certificates. If your brokers are using SSL for inter-broker
>>> communication and you are still using your private CA for broker's
>>> keystore, then you will need two separate endpoints in your listener
>>> configuration, one for your customer's clients and another for inter-broker
>>> communication so that you can specify a truststore with your private
>>> ca-cert for your broker connections.
>>>
>>> 2. Yes, all the commands can specify password on the command line, so
>>> you should be able to generate all the stores using a script without any
>>> interactions.
>>>
>>> Regards,
>>>
>>> Rajini
>>>
>>>
>>> On Wed, May 17, 2017 at 2:49 PM, Raghav  wrote:
>>>
>>>> One follow up questions Rajini:
>>>>
>>>> 1. Can we use some other mechanism like have our customer's use a well
>>>> known CA which JKS understands, and in that case we don't have to ask our
>>>> customers to do this certificate-in and certificate-out thing ? I am just
>>>> trying to understand if we can make our customer's workflow easier.
>>>> Anything else that you can suggest here
>>>>
>>>> 2. Can we automate the key gen steps mentioned on apache website and
>>>> adding to keystone and trust store so that we don't have to manually supply
>>>> password ? Currently, everytime I tried to do steps mentioned in
>>>> https://kafka.apache.org/documentation/#security I have to manually
>>>> give password. It would be great if we can automate this process either
>>>> through script or Java code. Any suggestions ...
>>>>
>>>>
>>>> Many thanks.
>>>>
>>>> On Tue, May 16, 2017 at 10:58 AM, Raghav  wrote:
>>>>
>>>>> Many thanks, Rajini.
>>>>

Re: Securing Kafka - Keystore and Truststore question

2017-05-17 Thread Raghav
Another quick question:

Say we chose to add our customer's certificates directly to our brokers
trust store and vice versa, could that work ? There is no documentation on
Kafka or Confluent site for this ?

Thanks.


On Wed, May 17, 2017 at 1:56 PM, Rajini Sivaram 
wrote:

> Raghav,
>
> 1. Yes, your customers can use certificates signed by a trusted authority.
> You can simply omit the truststore configuration for your broker in
> server.properties, and Kafka would use the default, which will trust the
> client certificates. If your brokers are using SSL for inter-broker
> communication and you are still using your private CA for broker's
> keystore, then you will need two separate endpoints in your listener
> configuration, one for your customer's clients and another for inter-broker
> communication so that you can specify a truststore with your private
> ca-cert for your broker connections.
>
> 2. Yes, all the commands can specify password on the command line, so you
> should be able to generate all the stores using a script without any
> interactions.
>
> Regards,
>
> Rajini
>
>
> On Wed, May 17, 2017 at 2:49 PM, Raghav  wrote:
>
>> One follow up questions Rajini:
>>
>> 1. Can we use some other mechanism like have our customer's use a well
>> known CA which JKS understands, and in that case we don't have to ask our
>> customers to do this certificate-in and certificate-out thing ? I am just
>> trying to understand if we can make our customer's workflow easier.
>> Anything else that you can suggest here
>>
>> 2. Can we automate the key gen steps mentioned on apache website and
>> adding to keystone and trust store so that we don't have to manually supply
>> password ? Currently, everytime I tried to do steps mentioned in
>> https://kafka.apache.org/documentation/#security I have to manually give
>> password. It would be great if we can automate this process either through
>> script or Java code. Any suggestions ...
>>
>>
>> Many thanks.
>>
>> On Tue, May 16, 2017 at 10:58 AM, Raghav  wrote:
>>
>>> Many thanks, Rajini.
>>>
>>> On Tue, May 16, 2017 at 8:43 AM, Rajini Sivaram >> > wrote:
>>>
>>>> Hi Raghav,
>>>>
>>>> If your Kafka broker is configured with *ssl.client.auth=required*, your
>>>> customers' clients need to provide a keystore. In any case, they need a
>>>> truststore, since your broker is using SSL. For the truststore, you can
>>>> give them the ca-cert, as you mentioned. A client keystore contains a
>>>> certificate and a private key.
>>>>
>>>> In the round-trip you described, customers generate the keys and give
>>>> you the certificate signing request, keeping their private key private. You
>>>> then send them back a signed certificate that goes into their keystore.
>>>> This is the standard way of signing and is secure.
>>>>
>>>> In the single step scenario that you described, you generate the
>>>> customer's key-pair consisting of certificate and private key. You then
>>>> need to send them both the signed certificate and the private key. This is
>>>> less secure. Unlike the round-trip, you now have the private key of the
>>>> customer.
>>>>
>>>> Regards,
>>>>
>>>> Rajini
>>>>
>>>>
>>>> On Tue, May 16, 2017 at 10:47 AM, Raghav  wrote:
>>>>
>>>>> Hi Rajini
>>>>>
>>>>> This was very helpful. I have another question along similar lines.
>>>>>
>>>>> We host the Kafka brokers, and we also have our own private CA. We want
>>>>> our customers to set up their Kafka clients (producer and consumer) over
>>>>> SSL with *ssl.client.auth=required*.
>>>>>
>>>>> Is there a way we can generate the certificate for our clients, sign it
>>>>> using our private CA, and then hand over to our customers these two
>>>>> certificates (1. ca-cert 2. cert-signed), which, if they add them to their
>>>>> keystore and truststore, will let them send messages to our Kafka brokers
>>>>> while keeping *ssl.client.auth=required*?
>>>>>
>>>>> We are looking to minimize our customers' pre-setup steps. For example, in
>>>>> the normal scenario, customers will need to generate a certificate and hand
>>>>> over their certificate signing request to our private CA, which we then
>>>>> sign, and send them the signed certificate and the private CA's certificate.
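
To make the two-step round trip described just above concrete, here is a
minimal sketch based on the standard keytool/openssl flow; the file names,
aliases, subject name, and password are placeholders.

# Customer side: generate a key pair and a certificate signing request.
keytool -keystore kafka.client.keystore.jks -alias client -validity 365 \
  -genkeypair -keyalg RSA -dname "CN=client.example.com" \
  -storepass <password> -keypass <password>
keytool -keystore kafka.client.keystore.jks -alias client -certreq \
  -file client-cert-request -storepass <password>

# Private CA side: sign the request and return cert-signed plus ca-cert.
openssl x509 -req -CA ca-cert -CAkey ca-key -in client-cert-request \
  -out client-cert-signed -days 365 -CAcreateserial -passin pass:<ca-password>

# Customer side: import the CA certificate, then the signed certificate.
keytool -keystore kafka.client.keystore.jks -alias CARoot -importcert \
  -file ca-cert -storepass <password> -noprompt
keytool -keystore kafka.client.keystore.jks -alias client -importcert \
  -file client-cert-signed -storepass <password> -noprompt

The one-step variant discussed in the replies would instead generate the key
pair and the signed certificate on the provider side and ship the whole
keystore, private key included, to the customer, which is why it is described
above as less secure.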

Re: Securing Kafka - Keystore and Truststore question

2017-05-17 Thread Raghav
One follow-up question, Rajini:

1. Can we use some other mechanism, like having our customers use a
well-known CA which JKS understands, so that we don't have to ask our
customers to do this certificate-in and certificate-out exchange? I am just
trying to understand if we can make our customers' workflow easier. Is there
anything else that you can suggest here?

2. Can we automate the key generation steps mentioned on the Apache website
and the adding to the keystore and truststore, so that we don't have to
manually supply a password? Currently, every time I try the steps mentioned in
https://kafka.apache.org/documentation/#security I have to give the password
manually. It would be great if we can automate this process either through a
script or Java code. Any suggestions?


Many thanks.
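
On question 1, the replies earlier on this page suggest letting customer
clients use certificates from a well-known CA while keeping the private CA
for inter-broker traffic, by exposing two listeners. The following sketch is
one way that could look; it assumes a broker version with named listeners and
per-listener overrides, and the listener names, ports, and paths are
placeholders.

# server.properties (sketch)
listeners=EXTERNAL://:9093,INTERNAL://:9094
listener.security.protocol.map=EXTERNAL:SSL,INTERNAL:SSL
inter.broker.listener.name=INTERNAL

# Broker keystore (certificate signed by the private CA), used by both listeners.
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=<ks-password>
ssl.key.password=<key-password>
ssl.client.auth=required

# EXTERNAL listener: no truststore override, so the default truststore applies
# and client certificates signed by well-known CAs are trusted.

# INTERNAL listener: trust only the private CA for inter-broker connections.
listener.name.internal.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
listener.name.internal.ssl.truststore.password=<ts-password>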

On Tue, May 16, 2017 at 10:58 AM, Raghav  wrote:

> Many thanks, Rajini.
>
> On Tue, May 16, 2017 at 8:43 AM, Rajini Sivaram 
> wrote:
>
>> Hi Raghav,
>>
>> If your Kafka broker is configured with *ssl.client.auth=required*, your
>> customers' clients need to provide a keystore. In any case, they need a
>> truststore, since your broker is using SSL. For the truststore, you can
>> give them the ca-cert, as you mentioned. A client keystore contains a
>> certificate and a private key.
>>
>> In the round-trip you described, customers generate the keys and give you
>> the certificate signing request, keeping their private key private. You
>> then send them back a signed certificate that goes into their keystore.
>> This is the standard way of signing and is secure.
>>
>> In the single step scenario that you described, you generate the
>> customer's key-pair consisting of certificate and private key. You then
>> need to send them both the signed certificate and the private key. This is
>> less secure. Unlike the round-trip, you now have the private key of the
>> customer.
>>
>> Regards,
>>
>> Rajini
>>
>>
>> On Tue, May 16, 2017 at 10:47 AM, Raghav  wrote:
>>
>>> Hi Rajini
>>>
>>> This was very helpful. I have another question along similar lines.
>>>
>>> We host the Kafka brokers, and we also have our own private CA. We want
>>> our customers to set up their Kafka clients (producer and consumer) over
>>> SSL with *ssl.client.auth=required*.
>>>
>>> Is there a way we can generate the certificate for our clients, sign it
>>> using our private CA, and then hand over to our customers these two
>>> certificates (1. ca-cert 2. cert-signed), which, if they add them to their
>>> keystore and truststore, will let them send messages to our Kafka brokers
>>> while keeping *ssl.client.auth=required*?
>>>
>>> We are looking to minimize our customers' pre-setup steps. For example, in
>>> the normal scenario, customers will need to generate a certificate and hand
>>> over their certificate signing request to our private CA, which we then
>>> sign, and send them the signed certificate and the private CA's certificate.
>>> So there is one round trip. Just wondering if we can reduce these two steps
>>> to one.
>>>
>>> Thanks.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, May 12, 2017 at 8:53 AM, Rajini Sivaram  wrote:
>>>
>>>> Raghav,
>>>>
>>>> 1. Clients need a keystore if you are using TLS client authentication. To
>>>> enable client authentication, you need to configure ssl.client.auth in
>>>> server.properties. This can be set to required|requested|none. If you
>>>> don't enable client authentication, any client will be able to connect to
>>>> your broker. You could alternatively use SASL for client authentication.
>>>>
>>>> 2. Client keystore is mandatory if ssl.client.auth=required, optional for
>>>> requested and not used for none. The truststore configured on the client
>>>> is used to authenticate the server. So you have to provide it unless your
>>>> broker is using certificates signed by a trusted authority.
>>>>
>>>> Hope that helps.
>>>>
>>>> Rajini
>>>>
>>>> On Fri, May 12, 2017 at 11:35 AM, Raghav  wrote:
>>>>
>>>> > Hi
>>>> >
>>>> > I read the documentation here:
>>>> > https://kafka.apache.org/documentation/#security_ssl
>>>> >
>>>> > I have a few questions about the truststore and keystore, based on this
>>>> > scenario:
>>>> >
>>>> > We have 5 Kafka brokers in our cluster. We want our clients to write to
>>>> > our Kafka brokers in a secure way. Suppose we also host a private CA, as
>>>> > mentioned in the documentation above, and provide our clients the
>>>> > *ca-cert* file, which they add to their truststore.
>>>> >
>>>> > 1. Do we require our clients to generate their certificate and have it
>>>> > signed by our private CA, and add it to their keystore?
>>>> >
>>>> > 2. When is the keystore used by clients, and when is the truststore used
>>>> > by clients?
>>>> >
>>>> >
>>>> > Thanks.
>>>> >
>>>> > --
>>>> > R
>>>> >
>>>>
>>>
>>>
>>>
>>> --
>>> Raghav
>>>
>>
>>
>
>
> --
> Raghav
>



-- 
Raghav
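
To tie the two answers above to concrete settings, here is a minimal sketch
of the broker and client SSL configuration they refer to; the paths and
passwords are placeholders.

# server.properties (broker): require client certificates.
ssl.client.auth=required
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=<ks-password>
ssl.key.password=<key-password>
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=<ts-password>

# client.properties (producer/consumer): the truststore authenticates the
# broker; the keystore is presented because the broker requires client auth.
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=<ts-password>
ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
ssl.keystore.password=<ks-password>
ssl.key.password=<key-password>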


  1   2   >