Re: Error for partition [__consumer_offsets,15] to broker

2017-12-08 Thread R Krishna
This is a known issue for us in 0.10, caused by network-related problems with
ZK leading to no-leader exceptions; restarting quickly fixed it. You can
increase the timeout to alleviate the problem a bit.

On Dec 8, 2017 8:20 PM, "Abhit Kalsotra"  wrote:

> Guys, can I get any reply or help on the same? This has been occurring very
> frequently in my production environment. Please help.
>
> Abhi
>
> On Dec 6, 2017 13:24, "Abhit Kalsotra"  wrote:
>
> > Hello *
> >
> > I am running Kafka (*0.10.2.0*) on Windows for the past one year.
> >
> > But of late there have been unique broker issues that I have observed 4-5
> > times in the last 4 months.
> >
> > Kafka setup config...
> >
> >
> > *3 ZK instances running on 3 different Windows servers; 7 Kafka broker
> > nodes running on a single Windows machine, with a different disk for each
> > log directory.*
> >
> > *My Kafka has 2 topics with 50 partitions each, and a replication
> > factor of 3.*
> >
> > *My partition-selection logic*: each message has a unique ID, and the
> > logic for selecting a partition is (unique ID % 50); the Kafka producer
> > API is then called to route the message to that topic partition.
> >
> > But of late a unique case has been cropping up on the Kafka broker
> > nodes:
> > [2017-12-02 02:47:40,024] ERROR [ReplicaFetcherThread-0-4], Error for
> > partition [__consumer_offsets,15] to broker 4:org.apache.kafka.common.
> > errors.NotLeaderForPartitionException: This server is not the leader for
> > that topic-partition. (kafka.server.ReplicaFetcherThread)
> >
> > The entire server.log is filled with these messages, and it is very large.
> > Please help me understand under what circumstances these can occur and
> > what measures I need to take.
> >
> > Courtesy
> > Abhi
> >
> > --
> >
> > If you can't succeed, call it version 1.0
> >
>


Re: Error for partition [__consumer_offsets,15] to broker

2017-12-08 Thread Abhit Kalsotra
Guys, can I get any reply or help on the same? This has been occurring very
frequently in my production environment. Please help.

Abhi

On Dec 6, 2017 13:24, "Abhit Kalsotra"  wrote:

> Hello *
>
> I am running Kafka (*0.10.2.0*) on Windows for the past one year.
>
> But of late there have been unique broker issues that I have observed 4-5
> times in the last 4 months.
>
> Kafka setup config...
>
>
> *3 ZK instances running on 3 different Windows servers; 7 Kafka broker
> nodes running on a single Windows machine, with a different disk for each
> log directory.*
>
> *My Kafka has 2 topics with 50 partitions each, and a replication
> factor of 3.*
>
> *My partition-selection logic*: each message has a unique ID, and the
> logic for selecting a partition is (unique ID % 50); the Kafka producer
> API is then called to route the message to that topic partition.
>
> But of late a unique case has been cropping up on the Kafka broker
> nodes:
> [2017-12-02 02:47:40,024] ERROR [ReplicaFetcherThread-0-4], Error for
> partition [__consumer_offsets,15] to broker 4:org.apache.kafka.common.
> errors.NotLeaderForPartitionException: This server is not the leader for
> that topic-partition. (kafka.server.ReplicaFetcherThread)
>
> The entire server.log is filled with these messages, and it is very large.
> Please help me understand under what circumstances these can occur and
> what measures I need to take.
>
> Courtesy
> Abhi
>
> --
>
> If you can't succeed, call it version 1.0
>
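A minimal sketch (in Python, with illustrative message IDs; the real producer call is a stand-in) of the (unique ID % 50) partition selection described in the message above:

```python
# Sketch of the poster's routing: each message's unique ID is mapped
# deterministically onto one of the topic's 50 partitions.
NUM_PARTITIONS = 50  # the poster's topics each have 50 partitions

def select_partition(unique_id: int) -> int:
    """Map a message's unique ID onto a partition, as in (unique ID % 50)."""
    return unique_id % NUM_PARTITIONS

# With a real producer client, the chosen partition would then be passed
# explicitly when building the record to send.
assert select_partition(50) == 0    # IDs wrap around after 49
assert select_partition(123) == 23
```

Note that this bypasses the default partitioner entirely, so the ID-to-partition mapping is fixed by the application.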


Re: Error for partition [__consumer_offsets,15] to broker

2017-12-08 Thread Abhit Kalsotra
And this is my typical broker config

broker.id=0
port:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
offsets.retention.minutes=360
advertised.host.name=1.1.1.2
advertised.port:9093
# A comma separated list of directories under which to store log files
log.dirs=C:\\kafka_2.10-0.10.2.0-SNAPSHOT\\data\\kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.minutes=360
log.segment.bytes=52428800
log.retention.check.interval.ms=30
log.cleaner.enable=true
log.cleanup.policy=delete
log.cleaner.min.cleanable.ratio=0.5
log.cleaner.backoff.ms=15000
log.segment.delete.delay.ms=6000
auto.create.topics.enable=false
zookeeper.connect=1.1.1.2:2181,1.1.1.3:2182,1.1.1.4:2183
zookeeper.connection.timeout.ms=6000


And this is the third consecutive Saturday it has occurred. I have to
literally stop all the Kafka producers, delete the respective data
directories of all the Kafka brokers, and then start everything again :( This
does not look scalable; please help me figure out where to look.

Courtesy
Abhit

On Sat, Dec 9, 2017 at 9:23 AM, Abhit Kalsotra  wrote:

> Guys, can I get any reply or help on the same? This has been occurring very
> frequently in my production environment. Please help.
>
> Abhi
>
> On Dec 6, 2017 13:24, "Abhit Kalsotra"  wrote:
>
>> Hello *
>>
>> I am running Kafka (*0.10.2.0*) on Windows for the past one year.
>>
>> But of late there have been unique broker issues that I have observed 4-5
>> times in the last 4 months.
>>
>> Kafka setup config...
>>
>>
>> *3 ZK instances running on 3 different Windows servers; 7 Kafka broker
>> nodes running on a single Windows machine, with a different disk for each
>> log directory.*
>>
>> *My Kafka has 2 topics with 50 partitions each, and a replication
>> factor of 3.*
>>
>> *My partition-selection logic*: each message has a unique ID, and the
>> logic for selecting a partition is (unique ID % 50); the Kafka producer
>> API is then called to route the message to that topic partition.
>>
>> But of late a unique case has been cropping up on the Kafka broker
>> nodes:
>> [2017-12-02 02:47:40,024] ERROR [ReplicaFetcherThread-0-4], Error for
>> partition [__consumer_offsets,15] to broker 4:org.apache.kafka.common.
>> errors.NotLeaderForPartitionException: This server is not the leader for
>> that topic-partition. (kafka.server.ReplicaFetcherThread)
>>
>> The entire server.log is filled with these messages, and it is very large.
>> Please help me understand under what circumstances these can occur and
>> what measures I need to take.
>>
>> Courtesy
>> Abhi
>>
>> --
>>
>> If you can't succeed, call it version 1.0
>>
>


-- 
If you can't succeed, call it version 1.0


Re: How can I repartition/rebalance topics processed by a Kafka Streams topology?

2017-12-08 Thread Matthias J. Sax
Hard to give a generic answer.

1. We recommend over-partitioning your input topics to start with (to
avoid having to add new partitions later on); problem avoidance is the
best strategy. There is obviously some overhead for this on the broker
side, but it's not too big.

2. Not sure why you would need a new cluster? You can just create a new
topic in the same cluster and let Kafka Streams read from there.

3. Depending on your state requirements, you could also run two
applications in parallel -- the new one reads from the new input topic
with more partitions, and you configure your producer to write to the new
topic (or maybe even dual-write to both). Once your new application is
ramped up, you can stop the old one.

4. If you really need to add new partitions, you need to fix up all
topics manually -- including all topics Kafka Streams created for you.
Adding partitions messes up all your state, as the key-based
partitioning changes. This implies that your application must be stopped!
Thus, if you have zero-downtime requirements you can't do this at all.

5. If you have a stateless application, though, all those issues go away,
and you can even add new partitions during runtime.
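The problem in point 4 can be illustrated with a toy stand-in for a key-based partitioner (the real default partitioner uses murmur2 on the serialized key bytes; the hash here is only for illustration):

```python
# Toy key-based partitioner: same idea as Kafka's hash(key) % num_partitions,
# but with a trivial stand-in hash so the example is self-contained.
def partition_for(key: bytes, num_partitions: int) -> int:
    return sum(key) % num_partitions

key = b"user-42"
# The same key lands on a different partition once the partition count
# changes, so state that was partitioned by key before the change is
# in the "wrong" place afterwards.
assert partition_for(key, 50) == 44
assert partition_for(key, 60) == 54
```

This is why adding partitions to a topic that backs keyed state requires stopping the application and repartitioning the data.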


Hope this helps.


-Matthias



On 12/8/17 11:02 AM, Dmitry Minkovsky wrote:
> I am about to put a topology into production and I am concerned that I
> don't know how to repartition/rebalance the topics in the event that I need
> to add more partitions.
> 
> My inclination is that I should spin up a new cluster and run some kind of
> consumer/producer combination that takes data from the previous cluster and
> writes it to the new cluster. A new instance of the Kafka Streams
> application then works against this new cluster. But I'm not sure how to
> best execute this, or whether this approach is sound at all. I am imagining
> many things may go wrong. Without going into further speculation, what is
> the best way to do this?
> 
> Thank you,
> Dmitry
> 





Re: Configuration: Retention and compaction

2017-12-08 Thread Matthias J. Sax
It does not. The current segment is open for writing and only closed
(ie, rolled) segments are considered.

There is a bunch of broker/topic configs that you can play with to
influence log rolling and compaction.
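As the quoted discussion below notes, a topic can combine both cleanup policies (`cleanup.policy="delete,compact"`). A toy model of that combined behaviour, under stated simplifications (the real broker works segment by segment; this only illustrates the TTL-per-unchanged-key idea), might look like:

```python
def compact_and_expire(log, now_ms, retention_ms):
    """Toy model of cleanup.policy="delete,compact" over closed segments:
    compaction keeps only the latest record per key, and deletion drops
    keys whose latest record is older than retention.ms."""
    latest = {}
    for ts_ms, key, value in log:      # records in offset order
        latest[key] = (ts_ms, value)   # later records shadow earlier ones
    return {k: v for k, (ts_ms, v) in latest.items()
            if now_ms - ts_ms <= retention_ms}

# Key "a" was updated recently, so it survives; "b" was last written
# longer than retention.ms ago, so it expires.
records = [(0, "a", 1), (500, "b", 2), (9000, "a", 3)]
assert compact_and_expire(records, now_ms=10_000, retention_ms=5_000) == {"a": 3}
```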

-Matthias

On 12/8/17 10:53 AM, Dmitry Minkovsky wrote:
> Matthias, you read my mind—having examined Kafka Streams intermediate topic
> configs and then Googled my way to KIP-71
> ,
> I was confused about this dual policy. Thank you.
> 
> Still wondering about my second question though: does deletion/compaction
> affect the currently opened log segment? Seems like it cannot.
> 
> 
> 
> 
> On Mon, Dec 4, 2017 at 2:54 PM, Matthias J. Sax 
> wrote:
> 
>> Topic can be configured in "dual" mode too via
>>
>>   cleanup.policy="delete,compact"
>>
>> For this case, `retention.ms` is basically a TTL for a key that is not
>> updated for this amount of time.
>>
>>
>> -Matthias
>>
>>
>>
>> On 12/3/17 11:54 AM, Jan Filipiak wrote:
>>> Hi
>>>
>>> the only retention time that applies to compacted topics is
>>> delete.retention.ms:
>>> the duration that tombstones for deletes will be kept in the topic
>>> during compaction.
>>>
>>> A very detailed explanation of what is going on can be found here:
>>>
>>> https://kafka.apache.org/documentation/#compaction
>>>
>>> Hope this helps
>>>
>>> Best Jan
>>>
>>>
>>> On 03.12.2017 20:27, Dmitry Minkovsky wrote:
 This is a pretty stupid question. Most likely I should verify these by
 observation, but really I want to verify that my understanding of the
 documentation is correct:

 Suppose I have topic configurations like:

 retention.ms=$time
 cleanup.policy=compact


 My questions are:

 1. After $time, any offsets older than $time will be eligible for
 compaction?
 2. Regardless of $time, any offsets in the current segment will
 not be
 compacted?


 Thank you,
 Dmitry

>>>
>>
>>
> 





How can I repartition/rebalance topics processed by a Kafka Streams topology?

2017-12-08 Thread Dmitry Minkovsky
I am about to put a topology into production and I am concerned that I
don't know how to repartition/rebalance the topics in the event that I need
to add more partitions.

My inclination is that I should spin up a new cluster and run some kind of
consumer/producer combination that takes data from the previous cluster and
writes it to the new cluster. A new instance of the Kafka Streams
application then works against this new cluster. But I'm not sure how to
best execute this, or whether this approach is sound at all. I am imagining
many things may go wrong. Without going into further speculation, what is
the best way to do this?

Thank you,
Dmitry


Re: Configuration: Retention and compaction

2017-12-08 Thread Dmitry Minkovsky
Matthias, you read my mind—having examined Kafka Streams intermediate topic
configs and then Googled my way to KIP-71
,
I was confused about this dual policy. Thank you.

Still wondering about my second question though: does deletion/compaction
affect the currently opened log segment? Seems like it cannot.




On Mon, Dec 4, 2017 at 2:54 PM, Matthias J. Sax 
wrote:

> Topic can be configured in "dual" mode too via
>
>   cleanup.policy="delete,compact"
>
> For this case, `retention.ms` is basically a TTL for a key that is not
> updated for this amount of time.
>
>
> -Matthias
>
>
>
> On 12/3/17 11:54 AM, Jan Filipiak wrote:
> > Hi
> >
> > the only retention time that applies to compacted topics is
> > delete.retention.ms:
> > the duration that tombstones for deletes will be kept in the topic
> > during compaction.
> >
> > A very detailed explanation of what is going on can be found here:
> >
> > https://kafka.apache.org/documentation/#compaction
> >
> > Hope this helps
> >
> > Best Jan
> >
> >
> > On 03.12.2017 20:27, Dmitry Minkovsky wrote:
> >> This is a pretty stupid question. Most likely I should verify these by
> >> observation, but really I want to verify that my understanding of the
> >> documentation is correct:
> >>
> >> Suppose I have topic configurations like:
> >>
> >> retention.ms=$time
> >> cleanup.policy=compact
> >>
> >>
> >> My questions are:
> >>
> >> 1. After $time, any offsets older than $time will be eligible for
> >> compaction?
> >> 2. Regardless of $time, any offsets in the current segment will
> >> not be
> >> compacted?
> >>
> >>
> >> Thank you,
> >> Dmitry
> >>
> >
>
>


Re: Kafka Monitoring

2017-12-08 Thread Michal Michalski
Hi,

We have no modifications in that file. What we do is have a "wrapper"
that's just a Docker "entrypoint" (a bash script) whose contents are:

export KAFKA_OPTS="$KAFKA_OPTS
-javaagent:/jolokia-jvm-agent.jar=port=8074,host=0.0.0.0"
exec ${KAFKA_DIR}/bin/kafka-server-start.sh \
  ${KAFKA_DIR}/config/server.properties

Regarding your second question - we have an in-house monitoring solution
that allows querying that endpoint and extracting metrics from the JSON
returned.

If you're asking about what you should monitor, I think these links will
answer your question:
https://docs.confluent.io/3.0.0/kafka/monitoring.html
https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/
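The polling described above could be sketched like this in Python (the base URL, hostname, and bean list are illustrative assumptions built from the agent snippet earlier in the thread, not our actual in-house tool):

```python
import json
from urllib.request import urlopen

# Hypothetical base URL; 8074 is the agent port from the KAFKA_OPTS snippet.
JOLOKIA_BASE = "http://localhost:8074/jolokia/read"

# The MBean patterns queried in the thread above.
BEANS = [
    "kafka.server:*",
    "kafka.controller:*",
    "kafka.log:*",
    "kafka.network:*",
    "java.lang:type=Memory",
]

def metric_url(bean: str, base: str = JOLOKIA_BASE) -> str:
    """Build the Jolokia read URL for one MBean pattern."""
    return f"{base}/{bean}"

def fetch_metrics(bean: str) -> dict:
    """HTTP GET one endpoint and decode the returned JSON
    (requires a broker with the Jolokia agent actually running)."""
    with urlopen(metric_url(bean)) as resp:
        return json.load(resp)
```

A monitoring loop would simply call `fetch_metrics` for each entry in `BEANS` and extract the values it needs from the JSON.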


On 8 December 2017 at 14:56, Irtiza Ali  wrote:

> Thanks Michal. Can you kindly send me your kafka-run-class.sh and
> kafka-server-start.sh files so I can look at what you have done? I have
> done the same thing you explained above, but when I go to <
> http://localhost:/jolokia/list> I only get metrics for ZooKeeper,
> not the metrics above.
>
> How are you using Jolokia to monitor your Kafka cluster?
>
> Thanks in advance
>
> On Fri, Dec 8, 2017 at 3:10 PM, Michal Michalski <
> michal.michal...@zalando.ie> wrote:
>
> > Hi Irtiza,
> >
> > I don't have any tutorial, but I can tell you what we do :-)
> >
> > First of all we have Jolokia agent jar included in our Kafka Docker
> image.
> > Then we append this to KAFKA_OPTS
> >
> > -javaagent:/jolokia-jvm-agent.jar=port=8074,host=0.0.0.0
> >
> >
> > Relevant port is then "exposed" in Docker image and "allowed" in AWS
> > Security Group.
> > Then our monitoring tool queries the following endpoints (just an HTTP
> > GET) to get a list of all the metrics we need:
> >
> > :8074/jolokia/read/kafka.server:*
> > :8074/jolokia/read/kafka.controller:*
> > :8074/jolokia/read/kafka.log:*
> > :8074/jolokia/read/kafka.network:*
> > :8074/jolokia/read/java.lang:type=Memory
> >
> >
> > And that's it, we get a nice JSON that we can use the way we want :-)
> > I hope I didn't miss anything, but this should be it.
> >
> > M.
> >
> >
> > On 8 December 2017 at 09:28, Irtiza Ali  wrote:
> >
> > > Hello Michal,
> > >
> > > Can you send me a link to a tutorial or provide some resources for the
> > > Jolokia configuration with Kafka?
> > >
> > >
> > > Thank you
> > > Irtiza
> > >
> > > On Wed, Dec 6, 2017 at 8:00 PM, Michal Michalski <
> > > michal.michal...@zalando.ie> wrote:
> > >
> > > > Hi Irtiza,
> > > >
> > > > We're using Jolokia and we had no problems with it.
> > > > It would be useful to know what exactly you did (how you "plugged in"
> > > > Jolokia, how you configured it, what endpoint are you querying etc.)
> to
> > > > help you.
> > > >
> > > > On 6 December 2017 at 10:36, Irtiza Ali  wrote:
> > > >
> > > > > Hello everyone,
> > > > >
> > > > > I am working on a Python-based Kafka monitoring application. I am
> > > > > unable to figure out how to retrieve the metrics using Jolokia. I
> > > > > have enabled the port for metrics retrieval to .
> > > > >
> > > > > I have two questions:
> > > > >
> > > > > 1) Is there something that I am not doing correctly?
> > > > > 2) Is there some other way to do it?
> > > > >
> > > > > With Regards
> > > > > Irtiza Alli
> > > > >
> > > >
> > >
> >
>


Re: Kafka Monitoring

2017-12-08 Thread Irtiza Ali
Thanks Michal. Can you kindly send me your kafka-run-class.sh and
kafka-server-start.sh files so I can look at what you have done? I have
done the same thing you explained above, but when I go to <
http://localhost:/jolokia/list> I only get metrics for ZooKeeper,
not the metrics above.

How are you using Jolokia to monitor your Kafka cluster?

Thanks in advance

On Fri, Dec 8, 2017 at 3:10 PM, Michal Michalski <
michal.michal...@zalando.ie> wrote:

> Hi Irtiza,
>
> I don't have any tutorial, but I can tell you what we do :-)
>
> First of all we have Jolokia agent jar included in our Kafka Docker image.
> Then we append this to KAFKA_OPTS
>
> -javaagent:/jolokia-jvm-agent.jar=port=8074,host=0.0.0.0
>
>
> Relevant port is then "exposed" in Docker image and "allowed" in AWS
> Security Group.
> Then our monitoring tool queries the following endpoints (just an HTTP
> GET) to get a list of all the metrics we need:
>
> :8074/jolokia/read/kafka.server:*
> :8074/jolokia/read/kafka.controller:*
> :8074/jolokia/read/kafka.log:*
> :8074/jolokia/read/kafka.network:*
> :8074/jolokia/read/java.lang:type=Memory
>
>
> And that's it, we get a nice JSON that we can use the way we want :-)
> I hope I didn't miss anything, but this should be it.
>
> M.
>
>
> On 8 December 2017 at 09:28, Irtiza Ali  wrote:
>
> > Hello Michal,
> >
> > Can you send me a link to a tutorial or provide some resources for the
> > Jolokia configuration with Kafka?
> >
> >
> > Thank you
> > Irtiza
> >
> > On Wed, Dec 6, 2017 at 8:00 PM, Michal Michalski <
> > michal.michal...@zalando.ie> wrote:
> >
> > > Hi Irtiza,
> > >
> > > We're using Jolokia and we had no problems with it.
> > > It would be useful to know what exactly you did (how you "plugged in"
> > > Jolokia, how you configured it, what endpoint are you querying etc.) to
> > > help you.
> > >
> > > On 6 December 2017 at 10:36, Irtiza Ali  wrote:
> > >
> > > > Hello everyone,
> > > >
> > > > I am working on a Python-based Kafka monitoring application. I am
> > > > unable to figure out how to retrieve the metrics using Jolokia. I
> > > > have enabled the port for metrics retrieval to .
> > > >
> > > > I have two questions:
> > > >
> > > > 1) Is there something that I am not doing correctly?
> > > > 2) Is there some other way to do it?
> > > >
> > > > With Regards
> > > > Irtiza Alli
> > > >
> > >
> >
>


Re: Running Kafka 1.0 binaries with inter.broker.protocol.version = 0.10

2017-12-08 Thread Debraj Manna
Can anyone let me know what happens if I set both inter.broker.protocol.version
and log.message.format.version to 0.10 with the updated 1.0 binaries? How are
the Kafka brokers supposed to behave?

On Thu, Dec 7, 2017 at 5:10 PM, Debraj Manna 
wrote:

> Hi
>
> Anyone any thoughts on my last query?
>
>
> On Wed, Dec 6, 2017 at 11:09 PM, Debraj Manna 
> wrote:
>
>> Thanks Manikumar for replying. One more query regarding your first reply
>>
>> What if I set both inter.broker.protocol.version & log.message.format.version
>> to 0.10 and update the binaries? How is Kafka supposed to behave, and what
>> are we going to miss?
>>
>> On Wed, Dec 6, 2017 at 12:34 PM, Manikumar 
>> wrote:
>>
>>> Hi,
>>>
>>> 1. inter.broker.protocol.version should be higher than or equal to
>>> log.message.format.version.
>>> So with inter.broker.protocol.version at 0.10, we cannot use the latest
>>> message format, and the broker won't start.
>>>
>>> 2. Since the other brokers in the cluster don't understand the latest
>>> protocol, we cannot directly set inter.broker.protocol.version = 1.0 and
>>> restart the broker. In the first restart we update the binaries, and in
>>> the second restart we change the protocol.
>>>
>>> We should follow the steps given in the docs.
>>>
>>> On Wed, Dec 6, 2017 at 11:21 AM, Debraj Manna 
>>> wrote:
>>>
>>> > Hi
>>> >
>>> > Anyone any thoughts?
>>> >
>>> >
>>> >
>>> > On Tue, Dec 5, 2017 at 8:38 PM, Debraj Manna >> >
>>> > wrote:
>>> >
>>> > > Hi
>>> > >
>>> > > Regarding  the Kafka Rolling Upgrade steps as mentioned in the doc
>>> > > 
>>> > >
>>> > > Can you let me know how Kafka is supposed to behave if the binaries
>>> > > are upgraded to the latest 1.0 but inter.broker.protocol.version still
>>> > > points to 0.10 on all the brokers? What features will I be missing in
>>> > > Kafka 1.0, and what problems should I expect?
>>> > >
>>> > > Also can you let me know in rolling upgrade (from 0.10 to 1.0) if I
>>> > follow
>>> > > the below steps how are Kafka supposed to behave
>>> > >
>>> > >
>>> > >1. Add inter.broker.protocol.version = 1.0 in a broker update the
>>> > >binary and restart it.
>>> > >2. Then go to the other brokers one by one and repeat the above
>>> steps
>>> > >
>>> > >
>>> >
>>>
>>
>>
>


Re: Kafka Consumer Committing Offset Even After Re-Assignment

2017-12-08 Thread Saïd Bouras
Hi Praveen,

I don't know if that's your case, but if you know that consumers will
lose ownership of partitions, you can use a
*ConsumerRebalanceListener* to commit the offset of the last record processed
in a clean way.

If you don't do that, the rebalance will start only when the GroupCoordinator
stops receiving heartbeats from the consumer.
I don't know if that will solve your problem, but the consumer, after
committing its last offset, will leave the consumer group.

Regards


On Thu, Dec 7, 2017 at 9:58 PM Praveen  wrote:

> I have 4 consumers on 2 boxes (running two consumers each) and 16
> partitions. Each consumer takes 4 partitions.
>
> In Kafka 0.9.0.1, I'm noticing that even when a consumer is no longer
> assigned the partition, it is able to commit offset to it.
>
> *Box 1 Started*
> t1 - Box 1, Consumer 1 - Owns 8 partitions
>   Box 1, Consumer 2 - Owns 8 partitions
>
>   Consumers start polling and are submitting tasks to a task pool for
> processing.
>
> *Box 2 Started*
> t2 - Box 1, Consumer 1 - Owns 4 partitions
>   Box 1, Consumer 2 - Owns 4 partitions
>   Box 2, Consumer 1 - Owns 4 partitions
>   Box 2, Consumer 2 - Owns 4 partitions
>
>   Partition-1 is now reassigned to Box 2, Consumer 1.
>   But Box 1, Consumer 1 already submitted some of the records for
> processing when it owned the partition earlier.
>
> t3 - Box 1, Consumer 1 - After the tasks finish executing, even though it
> no longer owns the partition, it is still able to commit the offset
>
> t4 - Box 2, Consumer 1 - Commits offsets as well, overwriting offset
> committed by Box 1, Consumer 1.
>
> Is this expected? Should I be using the ConsumerRebalanceListener to
> prevent commits to partitions not owned by the consumer?
>
> - Praveen
>
-- 

Saïd BOURAS

Consultant Big Data
Mobile: 0662988731
Zenika Paris
10 rue de Milan 75009 Paris
Standard : +33(0)1 45 26 19 15 <+33(0)145261915> - Fax : +33(0)1 72 70 45 10
<+33(0)172704510>


Re: Kafka Monitoring

2017-12-08 Thread Michal Michalski
Hi Irtiza,

I don't have any tutorial, but I can tell you what we do :-)

First of all we have Jolokia agent jar included in our Kafka Docker image.
Then we append this to KAFKA_OPTS

-javaagent:/jolokia-jvm-agent.jar=port=8074,host=0.0.0.0


Relevant port is then "exposed" in Docker image and "allowed" in AWS
Security Group.
Then our monitoring tool queries the following endpoints (just an HTTP
GET) to get a list of all the metrics we need:

:8074/jolokia/read/kafka.server:*
:8074/jolokia/read/kafka.controller:*
:8074/jolokia/read/kafka.log:*
:8074/jolokia/read/kafka.network:*
:8074/jolokia/read/java.lang:type=Memory


And that's it, we get a nice JSON that we can use the way we want :-)
I hope I didn't miss anything, but this should be it.

M.


On 8 December 2017 at 09:28, Irtiza Ali  wrote:

> Hello Michal,
>
> Can you send me a link to a tutorial or provide some resources for the
> Jolokia configuration with Kafka?
>
>
> Thank you
> Irtiza
>
> On Wed, Dec 6, 2017 at 8:00 PM, Michal Michalski <
> michal.michal...@zalando.ie> wrote:
>
> > Hi Irtiza,
> >
> > We're using Jolokia and we had no problems with it.
> > It would be useful to know what exactly you did (how you "plugged in"
> > Jolokia, how you configured it, what endpoint are you querying etc.) to
> > help you.
> >
> > On 6 December 2017 at 10:36, Irtiza Ali  wrote:
> >
> > > Hello everyone,
> > >
> > > I am working on a Python-based Kafka monitoring application. I am unable
> > > to figure out how to retrieve the metrics using Jolokia. I have enabled
> > > the port for metrics retrieval to .
> > >
> > > I Have two questions
> > >
> > > 1) Is there something that I am not doing correctly.
> > > 2) Is there some other way to do it.
> > >
> > > With Regards
> > > Irtiza Alli
> > >
> >
>


Re: Kafka Monitoring

2017-12-08 Thread Irtiza Ali
Hello Michal,

Can you send me a link to a tutorial or provide some resources for the Jolokia
configuration with Kafka?


Thank you
Irtiza

On Wed, Dec 6, 2017 at 8:00 PM, Michal Michalski <
michal.michal...@zalando.ie> wrote:

> Hi Irtiza,
>
> We're using Jolokia and we had no problems with it.
> It would be useful to know what exactly you did (how you "plugged in"
> Jolokia, how you configured it, what endpoint are you querying etc.) to
> help you.
>
> On 6 December 2017 at 10:36, Irtiza Ali  wrote:
>
> > Hello everyone,
> >
> > I am working on a Python-based Kafka monitoring application. I am unable
> > to figure out how to retrieve the metrics using Jolokia. I have enabled
> > the port for metrics retrieval to .
> >
> > I Have two questions
> >
> > 1) Is there something that I am not doing correctly.
> > 2) Is there some other way to do it.
> >
> > With Regards
> > Irtiza Alli
> >
>


Re: Kafka Monitoring

2017-12-08 Thread Irtiza Ali
Thank you, Subhash. I will check it out.

On Wed, Dec 6, 2017 at 5:43 PM, Subhash Sriram 
wrote:

> Hi Irtiza,
>
> Have you looked at jmxtrans? It has multiple output writers for the
> metrics, and one of them is the KeyOutWriter, which just writes to disk.
>
> https://github.com/jmxtrans/jmxtrans/wiki
>
> Hope that helps!
>
> Thanks,
> Subhash
>
> Sent from my iPhone
>
> > On Dec 6, 2017, at 5:36 AM, Irtiza Ali  wrote:
> >
> > Hello everyone,
> >
> > I am working on a Python-based Kafka monitoring application. I am unable to
> > figure out how to retrieve the metrics using Jolokia. I have enabled the
> > port for metrics retrieval to .
> >
> > I Have two questions
> >
> > 1) Is there something that I am not doing correctly.
> > 2) Is there some other way to do it.
> >
> > With Regards
> > Irtiza Alli
>