Re: Metadata Refresh and TimeoutException when MAX_BLOCK_MS_CONFIG set 0

2022-10-29 Thread Bhavesh Mistry
refresh is not available. I hope there is enough interest to make this producer distinguish broker down vs metadata not available. Thanks, Bhavesh On Mon, Oct 10, 2022 at 4:04 PM Bhavesh Mistry wrote: > Hi Luke, > > Thanks for the pointers. > > Sorry for being late; I was out. >

Re: Metadata Refresh and TimeoutException when MAX_BLOCK_MS_CONFIG set 0

2022-10-10 Thread Bhavesh Mistry
this idea? > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-286%3A+producer.send%28%29+should+not+block+on+metadata+update > > Thank you. > Luke > > On Sat, Sep 24, 2022 at 6:36 AM Bhavesh Mistry > > wrote: > > > Hello Kafka Team, > > > > I would

Re: Metadata Refresh and TimeoutException when MAX_BLOCK_MS_CONFIG set 0

2022-09-23 Thread Bhavesh Mistry
Hello Kafka Team, I would appreciate any insight into how to distinguish between Broker Down vs Metadata Refresh not available due to timing issues. Thanks, Bhavesh On Mon, Sep 19, 2022 at 12:50 PM Bhavesh Mistry wrote: > Hello Kafka Team, > > > > We have an environment whe

Metadata Refresh and TimeoutException when MAX_BLOCK_MS_CONFIG set 0

2022-09-19 Thread Bhavesh Mistry
Hello Kafka Team, We have an environment where Kafka Broker can go down for whatever reason. Hence, we had configured MAX_BLOCK_MS_CONFIG=0 because we wanted to drop messages when brokers were NOT available. Now the issue is we get data loss due to METADATA not being available and get
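
For context, a minimal sketch of the setup being described, assuming the standard Java producer client (broker address and topic are placeholders). With max.block.ms=0 the send path fails immediately with a TimeoutException whenever metadata is not yet available, and that exception looks the same whether the broker is down or the first metadata refresh simply has not completed:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;

    public class NonBlockingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "0"); // never block waiting for metadata

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                try {
                    producer.send(new ProducerRecord<>("my-topic", "payload"), (metadata, exception) -> {
                        if (exception instanceof TimeoutException) {
                            // metadata unavailable: broker down OR refresh not done yet; drop the record
                        }
                    });
                } catch (TimeoutException e) {
                    // depending on client version, the metadata timeout may also surface here from send()
                }
            }
        }
    }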

Re: Authorization Engine For Kafka Related to KPI-11

2015-11-03 Thread Bhavesh Mistry
On Sun, Nov 1, 2015 at 11:15 PM, Bhavesh Mistry <mistry.p.bhav...@gmail.com> wrote: > Hi All, > > Has anyone used Apache Ranger as an Authorization Engine for Kafka topic > creation, consumption (read) and write operations on a topic? I am looking > at having an audit log and r

Re: Authorization Engine For Kafka Related to KPI-11

2015-11-03 Thread Bhavesh Mistry
+ Kafka Dev team to see if the Kafka Dev team knows or recommends any Auth engine for Producers/Consumers. Thanks, Bhavesh Please pardon me, I accidentally sent the previous blank email. On Tue, Nov 3, 2015 at 9:52 PM, Bhavesh Mistry <mistry.p.bhav...@gmail.com> wrote: > On Sun, Nov 1, 2015 at

Authorization Engine For Kafka Related to KPI-11

2015-11-01 Thread Bhavesh Mistry
Hi All, Has anyone used Apache Ranger as an Authorization Engine for Kafka topic creation, consumption (read) and write operations on a topic? I am looking at having an audit log and regulating consumption/write to a particular topic (for example, having

Re: New Consumer API and Range Consumption with Fail-over

2015-08-04 Thread Bhavesh Mistry
! Jason On Thu, Jul 30, 2015 at 7:54 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hello Kafka Dev Team, With new Consumer API redesign ( https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java

New Consumer API and Range Consumption with Fail-over

2015-07-30 Thread Bhavesh Mistry
Hello Kafka Dev Team, With new Consumer API redesign ( https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java ), is there a capability to consume given the topic and partition start/ end position. How would I achieve following

Mirror Maker Copy Data over Public Network

2015-07-02 Thread Bhavesh Mistry
Hi All, Has anyone set up MirrorMaker to copy data to a target cluster via an SSH tunnel or IPsec for encrypting data? I cannot use application-level encryption because existing consumers expect non-encrypted data (already running in prod). Any feedback on SSH tunnel vs IPsec, or any issues you

Re: querying messages based on timestamps

2015-06-30 Thread Bhavesh Mistry
We had similar requirement to re-load the data based on timestamp (range between 1PM to 2PM) etc. We store the relationship between timestamp and largest offset number in Time Series Database using jmxtrans (LogEndOffset JMX bean vs current time.). You can setup polling interval to be 60 minutes

Re: At-least-once guarantees with high-level consumer

2015-06-18 Thread Bhavesh Mistry
Hi Carl, Producer-side retries can produce duplicate messages being sent to brokers (the same message with different offsets). Also, you may get duplicates when the High Level Consumer offset has not been saved or committed but you have already processed the data and your server restarts, etc... To guarantee

Re: How Producer handles Network Connectivity Issues

2015-05-29 Thread Bhavesh Mistry
Hi Kamal, In order to monitor each instance of the producer, you will need an alternative network monitoring channel (e.g. Flume or another Kafka cluster just for monitoring producers at large scale). Here is the detail: 1) Add a custom Appender for Log4J and intercept all logs of the Kafka Producer
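
A minimal sketch of step 1, assuming Log4j 1.x (AppenderSkeleton); the forwarding target and the logger-name prefixes are placeholders, since the old Scala producer logs under kafka.* and the new Java producer under org.apache.kafka.*:

    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.Level;
    import org.apache.log4j.spi.LoggingEvent;

    public class ProducerLogInterceptor extends AppenderSkeleton {

        @Override
        protected void append(LoggingEvent event) {
            String logger = event.getLoggerName();
            // forward only WARN/ERROR/FATAL coming from the Kafka producer loggers
            if (event.getLevel().isGreaterOrEqual(Level.WARN)
                    && (logger.startsWith("kafka") || logger.startsWith("org.apache.kafka"))) {
                forwardToMonitoring(logger, event.getRenderedMessage());
            }
        }

        private void forwardToMonitoring(String logger, String message) {
            // placeholder: ship to Flume, or to a dedicated monitoring Kafka cluster
        }

        @Override
        public void close() { }

        @Override
        public boolean requiresLayout() { return false; }
    }

The appender is then attached through the producer application's Log4j configuration.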

Re: Is there a way to know when I've reached the end of a partition (consumed all messages) when using the high-level consumer?

2015-05-11 Thread Bhavesh Mistry
I have used what Gwen has suggested, but to avoid false positives: while consuming records, keep track of the *last* consumed offset and compare it with the latest offset on the broker for the consumed topic when you get a TimeoutException for that particular partition of the given topic (e.g. JMX Bean *LogEndOffset* for

Re: Kafka Monitoring using JMX

2015-04-20 Thread Bhavesh Mistry
You can use this https://github.com/Stackdriver/jmxtrans-config-stackdriver/blob/master/jmxtrans/stackdriver/json-specify-instance/kafka.json as an example of how Mbean are named and how is being escaped with \, and just use different output writer for any thing prior to 0.8.2.1 version. After

Producer Behavior When one or more Brokers' Disk is Full.

2015-03-25 Thread Bhavesh Mistry
Hello Kafka Community, What is the expected behavior on the Producer side when one or more Brokers’ disks are full, but the retention period for topics (by size or by time limit) has not been reached? Does the producer keep sending data to those particular brokers, and/or does the Producer Queue get full and always throw Queue

Re: integrate Camus and Hive?

2015-03-11 Thread Bhavesh Mistry
messages' timestamps? On Mar 11, 2015, at 11:29, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI Yang, We do this today camus to hive (without the Avro) just plain old tab separated log line. We use the hive -f command to add dynamic partition to hive table: Bash Shell

Re: integrate Camus and Hive?

2015-03-11 Thread Bhavesh Mistry
On Wed, Mar 11, 2015 at 10:42 AM, Andrew Otto ao...@wikimedia.org wrote: Thanks, Do you have this partitioner implemented? Perhaps it would be good to try to get this into Camus as a build in option. HivePartitioner? :) -Ao On Mar 11, 2015, at 13:11, Bhavesh Mistry mistry.p.bhav

Re: Camus Issue about Output File EOF Issue

2015-03-04 Thread Bhavesh Mistry
-issues-end-of-file-javaioioexception-file-i.html for more info. We will have to patch Camus to copy to tmp directory then move to final destination as work around for now to make rename or file rename a more reliable. Thanks, Bhavesh On Monday, March 2, 2015, Bhavesh Mistry mistry.p.bhav

Camus Issue about Output File EOF Issue

2015-03-02 Thread Bhavesh Mistry
Hi Kafka User Team, I have been encountering two issues with the Camus Kafka ETL Job: 1) End Of File (unclosed files) 2) Not SequenceFile Error. The details of the issues can be found at https://groups.google.com/forum/#!topic/camus_etl/RHS3ASy7Eqc. If you guys have faced a similar issue, please let me

Re: Camus Issue about Output File EOF Issue

2015-03-02 Thread Bhavesh Mistry
file is on maprfs - you may want to check with your vendor... I doubt Camus was extensively tested on that particular FS. On Mon, Mar 2, 2015 at 3:59 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka User Team, I have been encountering two issues with the Camus Kafka ETL Job

Re: Camus Issue about Output File EOF Issue

2015-03-02 Thread Bhavesh Mistry
shows that it's trying to read a TEXT file as if it were a Seq file. That's why I suspected a misconfiguration of some sort. Why do you suspect a race condition? On Mon, Mar 2, 2015 at 5:19 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Gwen, We are using MapR (sorry, no Cloudera

Re: After Leadership Election and kafka.log JMXBean Registration Process

2015-02-24 Thread Bhavesh Mistry
HI Jun, Thanks for info. Thanks, Bhavesh On Tue, Feb 24, 2015 at 2:45 PM, Jun Rao j...@confluent.io wrote: These two metrics are always registered, whether the replica is the leader or the follower. Thanks, Jun On Mon, Feb 23, 2015 at 6:40 PM, Bhavesh Mistry mistry.p.bhav

After Leadership Election and kafka.log JMXBean Registration Process

2015-02-23 Thread Bhavesh Mistry
Hi Kafka Team or User Community, After a leadership election or a switch between follower/leader of a partition for a given topic, do the following JMX metric beans get registered (on the leader) and de-registered (on the follower)? LogEndOffset, Size, LogStartOffset, e.g.: kafka.log:type=Log,name=TOPIC-17-*Size*

Re: Get Latest Offset for Specific Topic for All Partition

2015-02-09 Thread Bhavesh Mistry
Shapira gshap...@cloudera.com wrote: You can use the metrics Kafka publishes. I think the relevant metrics are: Log.LogEndOffset Log.LogStartOffset Log.size Gwen On Thu, Feb 5, 2015 at 11:54 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI All, I just need to get the latest

Re: Kafka New(Java) Producer Connection reset by peer error and LB

2015-02-09 Thread Bhavesh Mistry
Hi Kafka Team, Please confirm if you would like to open a Jira issue to track this? Thanks, Bhavesh On Mon, Feb 9, 2015 at 12:39 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Team, We are getting this connection reset by peer after a couple of minutes after start-up

Get Latest Offset for Specific Topic for All Partition

2015-02-05 Thread Bhavesh Mistry
Hi All, I just need to get the latest offset # for a topic (not for a consumer group). Which API gets this info? My use case is to analyze whether the data injection rate to each partition is uniform or not (close). For this, I am planning to dump the latest offset into Graphite for each partition
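
For reference, newer Java consumer clients (0.10.1 and later) expose end offsets directly; a minimal sketch with broker address and topic as placeholders. On the 0.8.x clients of this thread, the JMX LogEndOffset beans suggested in the reply are the usual route:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class LatestOffsets {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                List<TopicPartition> partitions = consumer.partitionsFor("my-topic").stream()
                        .map(p -> new TopicPartition(p.topic(), p.partition()))
                        .collect(Collectors.toList());
                Map<TopicPartition, Long> end = consumer.endOffsets(partitions);
                // one latest offset per partition; push these to Graphite to compare injection rates
                end.forEach((tp, offset) -> System.out.println(tp + " -> " + offset));
            }
        }
    }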

Re: Kafka ETL Camus Question

2015-02-03 Thread Bhavesh Mistry
to comment regarding speculative execution. We have it disabled at the cluster level and typically don't need it for most of our jobs. Especially with something like Camus, I don't see any need to run parallel copies of the same task. On Mon, Feb 2, 2015 at 10:36 PM, Bhavesh Mistry

Re: Kafka ETL Camus Question

2015-02-02 Thread Bhavesh Mistry
Hi Jun, Thanks for info. I did not get answer to my question there so I thought I try my luck here :) Thanks, Bhavesh On Mon, Feb 2, 2015 at 9:46 PM, Jun Rao j...@confluent.io wrote: You can probably ask the Camus mailing list. Thanks, Jun On Thu, Jan 29, 2015 at 1:59 PM, Bhavesh

Kafka ETL Camus Question

2015-01-29 Thread Bhavesh Mistry
Hi Kafka Team or LinkedIn Team, I would like to know if you guys run the Camus ETL job with speculative execution true or false. Does it make sense to set this to false? Having it true creates additional load on brokers for each map task (creating a map task to pull the same partition twice). Is

Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 2 (with the correct links)

2015-01-26 Thread Bhavesh Mistry
Hi Kafka Team, I just wanted to bring to your attention a limitation of the new Java Producer compared to the old producer. a) Increasing partitions is limited by the configured memory allocation (buffer.memory, batch.size). The maximum partitions you could have before impacting (new Java Producers)

Counting # of Message Brokers Receive Per Minute Per Topic

2015-01-22 Thread Bhavesh Mistry
Hi Kafka Team, I need to count messages received by the entire Kafka Broker Cluster for a particular topic. I have 3 brokers, so do I need to sum the COUNT metric, or does one server's count reflect the count across all servers? It seems that the count is always increasing (although the metric name is *MessagesInPerSec*
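
Each broker's Count attribute is a per-broker, monotonically increasing total, so the cluster-wide figure is the sum across all three brokers, and a per-minute rate is the difference between two successive sums. A minimal JMX sketch, assuming the 0.8.2-style bean name and placeholder JMX endpoints:

    import java.util.Arrays;
    import java.util.List;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class TopicMessageCount {
        public static void main(String[] args) throws Exception {
            List<String> brokers = Arrays.asList("broker1:9999", "broker2:9999", "broker3:9999"); // placeholders
            ObjectName bean = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=my-topic");
            long total = 0;
            for (String hostPort : brokers) {
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://" + hostPort + "/jmxrmi");
                JMXConnector connector = JMXConnectorFactory.connect(url);
                try {
                    MBeanServerConnection mbs = connector.getMBeanServerConnection();
                    total += (Long) mbs.getAttribute(bean, "Count");
                } finally {
                    connector.close();
                }
            }
            System.out.println("Cluster-wide messages-in count for my-topic: " + total);
            // sample every minute; (current total - previous total) is the per-minute message count
        }
    }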

Kafka Cluster Monitoring and Documentation of Internals (JMX Metrics) of Rejected Events

2015-01-12 Thread Bhavesh Mistry
Hi Kafka Team, I am trying to understand Kafka internals and how a message can be corrupted or lost on the broker side. I have referred to the following documentation for monitoring: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Internals http://kafka.apache.org/documentation.html#monitoring I am

Latency Tracking Across All Kafka Component

2015-01-05 Thread Bhavesh Mistry
Hi Kafka Team/Users, We are using the LinkedIn Kafka data pipeline end-to-end: Producer(s) -> Local DC Brokers -> MM -> Central Brokers -> Camus Job -> HDFS. This is working out very well for us, but we need visibility into latency at each layer (Local DC Brokers -> MM -> Central Brokers -> Camus Job

Re: Kafka 0.8.2 new producer blocking on metadata

2014-12-29 Thread Bhavesh Mistry
Hi Paul, I have faced similar issue, which you have faced. Our use case was bit different and we needed to aggregate events and publish to same partition for same topic. Occasionally, I have run into blocked application threads (not because of metadata but sync block for each batch). When you

Re: [DISCUSSION] adding the serializer api back to the new java producer

2014-12-09 Thread Bhavesh Mistry
Hi All, This is very likely when you have a large site such as LinkedIn and you have thousands of servers producing data. You will have a mixed bag of producer and serialization or deserialization versions because of incremental code deployment. So, it is best to keep the API as generic as possible and each org

Re: [DISCUSSION] adding the serializer api back to the new java producer

2014-11-25 Thread Bhavesh Mistry
How will a mixed bag work on the Consumer side? The entire site cannot be rolled at once, so Consumers will have to deal with new and old serialized bytes? This could be the app team's responsibility. Are you guys targeting the 0.8.2 release, which may break customers who are already using the new producer API

Re: How to recover from ConsumerRebalanceFailedException ?

2014-11-20 Thread Bhavesh Mistry
alternative proposal for this for new and old high-level consumer API. Thanks, Bhavesh On Tue, Nov 18, 2014 at 9:53 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Jun, ZK cluster are up and running. What is best way to programmatically recover and I would try to exponential recovery

Re: How to recover from ConsumerRebalanceFailedException ?

2014-11-18 Thread Bhavesh Mistry
thread dies, then restart the sources. Is there any alternate approach or lifecycle method an API consumer can attach to the Consumer lifecycle, so it gets notified that it is dying and we can take some action? Thanks, Bhavesh On Mon, Nov 17, 2014 at 2:30 PM, Bhavesh Mistry mistry.p.bhav

Re: How to recover from ConsumerRebalanceFailedException ?

2014-11-18 Thread Bhavesh Mistry
Rao jun...@gmail.com wrote: Is your ZK service alive at that point? If not, you just need to monitor the ZK server properly. Thanks, Jun On Mon, Nov 17, 2014 at 2:30 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Team, I get following exception due to ZK/Network

How to recover from ConsumerRebalanceFailedException ?

2014-11-17 Thread Bhavesh Mistry
Hi Kafka Team, I get the following exception intermittently due to ZK/network issues. How do I recover *programmatically* from the consumer thread dying and restart the source, because we have alerts that, due to this error, partition OWNERSHIP is *none*? Please let me know how to restart the source and

Re: Enforcing Network Bandwidth Quote with New Java Producer

2014-11-17 Thread Bhavesh Mistry
On Fri, Nov 14, 2014 at 10:34 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI Kafka Team, We like to enforce a network bandwidth quota limit per minute on producer side. How can I do this ? I need some way to count compressed bytes on producer ? I know there is callback does

Enforcing Network Bandwidth Quote with New Java Producer

2014-11-14 Thread Bhavesh Mistry
Hi Kafka Team, We would like to enforce a network bandwidth quota limit per minute on the producer side. How can I do this? I need some way to count compressed bytes on the producer. I know the callback does not give this ability. Let me know the best way. Thanks, Bhavesh
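
One approach, sketched under the assumption that the new Java producer is being used: its built-in metrics already track post-compression bytes written to the network, so sampling "outgoing-byte-rate" periodically avoids counting bytes by hand (the enforcement action itself is left as a placeholder):

    import java.util.Map;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;

    public class BandwidthSampler {
        // returns the producer's outgoing byte rate (bytes/second, post-compression), or 0 if absent
        static double outgoingByteRate(KafkaProducer<?, ?> producer) {
            for (Map.Entry<MetricName, ? extends Metric> e : producer.metrics().entrySet()) {
                MetricName name = e.getKey();
                if ("producer-metrics".equals(name.group()) && "outgoing-byte-rate".equals(name.name())) {
                    return e.getValue().value();
                }
            }
            return 0.0;
        }
        // poll once a minute; if rate * 60 exceeds the quota, pause or drop sends on the caller side
    }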

Re: Programmatic Kafka version detection/extraction?

2014-11-11 Thread Bhavesh Mistry
If it is a Maven artifact then you will get the following pre-built property file from the Maven build, called pom.properties, under /META-INF/maven/groupId/artifactId/pom.properties. Here is a sample: #Generated by Maven #Mon Oct 10 10:44:31 EDT 2011 version=10.0.1 groupId=com.google.guava
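
A minimal sketch of reading that file from the classpath; the groupId/artifactId below are placeholders for whichever artifact is being inspected:

    import java.io.InputStream;
    import java.util.Properties;

    public class MavenVersion {
        public static String version(String groupId, String artifactId) throws Exception {
            String path = "/META-INF/maven/" + groupId + "/" + artifactId + "/pom.properties";
            try (InputStream in = MavenVersion.class.getResourceAsStream(path)) {
                if (in == null) {
                    return null; // not packaged by Maven, or path differs
                }
                Properties p = new Properties();
                p.load(in);
                return p.getProperty("version");
            }
        }

        public static void main(String[] args) throws Exception {
            // example coordinates for the Kafka client jar
            System.out.println(version("org.apache.kafka", "kafka-clients"));
        }
    }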

Re: expanding cluster and reassigning parititions without restarting producer

2014-11-10 Thread Bhavesh Mistry
I had a different experience with expanding partitions for the new producer and its impact. I only tried it for non-keyed messages. I would always advise keeping the batch size relatively low, or planning for expansion with the new Java producer in advance or from inception; otherwise running producer code is impacted.

Re: Announcing Confluent

2014-11-06 Thread Bhavesh Mistry
HI Guys, Thanks for your awesome support. I wish you good luck !! Thanks for open sources Kafka !! Thanks, Bhavesh On Thu, Nov 6, 2014 at 10:52 AM, Rajasekar Elango rela...@salesforce.com wrote: Congrats. Wish you all the very best and success. Thanks, Raja. On Thu, Nov 6, 2014 at

queued.max.message.chunks impact and consumer tuning

2014-11-04 Thread Bhavesh Mistry
Hi Kafka Dev Team, It seems that the maximum buffer size is set to 2 by default. What is the impact of changing this to 2000 or so? Will this improve consumer thread performance? More events will be buffered in memory. Or is there any other recommendation for tuning High Level Consumers? Here is

Re: High Level Consumer Iterator IllegalStateException Issue

2014-11-04 Thread Bhavesh Mistry
at 4:35 PM, Jun Rao jun...@gmail.com wrote: Bhavesh, That example has a lot of code. Could you provide a simpler test that demonstrates the problem? Thanks, Jun On Fri, Oct 31, 2014 at 10:07 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Jun, Here is code base

Re: queued.max.message.chunks impact and consumer tuning

2014-11-04 Thread Bhavesh Mistry
, but two should be sufficient. There is little reason to buffer more than that. If you increase it to 2000 you will most likely run into memory issues. E.g., if your fetch size is 1MB you would enqueue 1MB*2000 chunks in each queue. On Tue, Nov 04, 2014 at 09:05:44AM -0800, Bhavesh Mistry wrote

Re: Spark Kafka Performance

2014-11-04 Thread Bhavesh Mistry
Hi Eduardo, Can you please take a thread dump and see if there are blocking issues on the producer side? Do you have a single instance of the Producer and multiple threads? Are you using the Scala Producer or the new Java Producer? Also, what are your producer properties? Thanks, Bhavesh On Tue, Nov 4, 2014

Re: High Level Consumer Iterator IllegalStateException Issue

2014-10-31 Thread Bhavesh Mistry
...@gmail.com wrote: Do you have a simple test that can reproduce this issue? Thanks, Jun On Thu, Oct 30, 2014 at 8:34 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI Jun, Consumer Connector is not closed because I can see the ConsumerFetcher Thread alive but Blocked on *put

Re: partitions stealing balancing consumer threads across servers

2014-10-30 Thread Bhavesh Mistry
Hi Joel, I have a similar issue. I have tried *partition.assignment.strategy=roundrobin*, but how do you expect this to work? We have a topic with 32 partitions and 4 JVMs with 10 threads each (8 are backups if one of the JVMs goes down). The roundrobin does not select all the JVMs, only 3

Re: partitions stealing balancing consumer threads across servers

2014-10-30 Thread Bhavesh Mistry
Hi Joel, Correction to my previous question: What is the expected behavior of the *roundrobin* policy in the above scenario? Thanks, Bhavesh On Thu, Oct 30, 2014 at 1:39 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Joel, I have a similar issue. I have tried *partition.assignment.strategy

Re: partitions stealing balancing consumer threads across servers

2014-10-30 Thread Bhavesh Mistry
Hi Joel, Yes, I am on the Kafka trunk branch. In my scenario, if you have back-up threads, does that impact the allocation? If I have 24 threads (6 threads for each JVM, 4 JVMs total) in the above example, does partition allocation get evenly distributed (3 on each JVM)? Is this a supported use case

Re: High Level Consumer Iterator IllegalStateException Issue

2014-10-30 Thread Bhavesh Mistry
is that the consumer connector is already closed and then you call hasNext() on the iterator. Thanks, Jun On Wed, Oct 29, 2014 at 9:06 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Jun, The hasNext() itself throws this error. I have to manually reset state and sometime

Re: High Level Consumer Iterator IllegalStateException Issue

2014-10-29 Thread Bhavesh Mistry
hasNext() on the iterator. Thanks, Jun On Tue, Oct 28, 2014 at 10:50 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Neha, Thanks for your answer. Can you please let me know how I can resolve the Iterator IllegalStateException ? I would appreciate your is this is bug I

Re: High Level Consumer Iterator IllegalStateException Issue

2014-10-28 Thread Bhavesh Mistry
...@gmail.com wrote: queued.max.message.chunks controls the consumer's fetcher queue. On Mon, Oct 27, 2014 at 9:32 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI Neha, If I solved the problem number 1 think and number 2 will be solved (prob 1 is causing problem number 2(blocked

High Level Consumer and Close with Auto Commit On

2014-10-28 Thread Bhavesh Mistry
Hi Kafka Team, What is expected behavior when you close *ConsumerConnector* and auto commit is on ? Basically, when auto commit interval is set to 5 seconds and shutdown is called (before 5 seconds elapses) does ConsumerConnector commit the offset of message consumed by (next()) method or

Re: High Level Consumer and Close with Auto Commit On

2014-10-28 Thread Bhavesh Mistry
at ZookeeperConsumerConnector.scala (currently the only implementation of ConsumerConnector) you'll see shutdown() includes the following: if (config.autoCommitEnable) commitOffsets() Gwen On Tue, Oct 28, 2014 at 11:44 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Team

Re: High Level Consumer Iterator IllegalStateException Issue

2014-10-27 Thread Bhavesh Mistry
) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60) Thanks, Bhavesh On Sun, Oct 26, 2014 at 3:14 PM, Neha Narkhede neha.narkh...@gmail.com wrote: Can you provide the steps to reproduce this issue? On Fri, Oct 24, 2014 at 6:11 PM, Bhavesh Mistry mistry.p.bhav

Re: High Level Consumer Iterator IllegalStateException Issue

2014-10-27 Thread Bhavesh Mistry
fill up if your consumer thread dies or slows down. I'd recommend you ensure that all your consumer threads are alive. You can take a thread dump to verify this. Thanks, Neha On Mon, Oct 27, 2014 at 2:14 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Neha, I have two

Where Compression/Decompression happens

2014-10-27 Thread Bhavesh Mistry
Hi Kafka Team, Does compression happen on the Producer side (on the application thread, meaning the thread that calls the send method, or on a background Kafka thread), and where does decompression happen on the Consumer side? Is there any compression/decompression happening on the Broker side when receiving messages from the producer

Re: High Level Consumer Iterator IllegalStateException Issue

2014-10-24 Thread Bhavesh Mistry
I am using one from the Kafka Trunk branch. Thanks, Bhavesh On Fri, Oct 24, 2014 at 5:24 PM, Neha Narkhede neha.narkh...@gmail.com wrote: Which version of Kafka are you using on the consumer? On Fri, Oct 24, 2014 at 4:14 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI Kafka

Re: Sending Same Message to Two Topics on Same Broker Cluster

2014-10-21 Thread Bhavesh Mistry
:17 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Neha, Yes, I understand that, but when transmitting a single message I cannot set a list of all topics, only a single one. So I will have to add the same message to the buffer with a different topic. If the Kafka protocol allows adding multiple

Re: Sending Same Message to Two Topics on Same Broker Cluster

2014-10-20 Thread Bhavesh Mistry
PM, Neha Narkhede neha.narkh...@gmail.com wrote: Not really. You need producers to send data to Kafka. On Mon, Oct 20, 2014 at 9:05 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Team, I would like to send a single message to multiple topics (two for now) without re

Auto Purging Consumer Group Configuration [Especially Kafka Console Group]

2014-10-09 Thread Bhavesh Mistry
Hi Kafka, We have lots of lingering console consumer groups that people have created for testing or debugging purposes, for one-time use, via bin/kafka-console-consumer.sh. Is there an auto-purging clean-up script that Kafka provides? Is there any API to find out inactive Consumer groups and delete

Re: [Java New Producer] CPU Usage Spike to 100% when network connection is lost

2014-09-18 Thread Bhavesh Mistry
are running we did fix several bugs similar to this against trunk. -Jay On Wed, Sep 17, 2014 at 2:14 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Dev team, I see my CPU spike to 100% when network connection is lost for while. It seems network IO thread are very busy logging

Re: MBeans, dashes, underscores, and KAFKA-1481

2014-09-17 Thread Bhavesh Mistry
by '|'. Thanks, Jun On Tue, Sep 16, 2014 at 5:15 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI Otis, What is migration path ? If topic with special chars exists already( .,-,| etc) in previous version of producer/consumer of Kafka, what happens after the upgrade new

[Java New Producer] CPU Usage Spike to 100% when network connection is lost

2014-09-17 Thread Bhavesh Mistry
Hi Kafka Dev team, I see my CPU spike to 100% when network connection is lost for while. It seems network IO thread are very busy logging following error message. Is this expected behavior ? 2014-09-17 14:06:16.830 [kafka-producer-network-thread] ERROR

Re: MBeans, dashes, underscores, and KAFKA-1481

2014-09-16 Thread Bhavesh Mistry
Hi Otis, What is the migration path? If a topic with special chars ( ., -, | etc.) already exists in a previous version of the Kafka producer/consumer, what happens after upgrading to the new producer or consumer (Kafka version)? Also, in the new producer API (Kafka trunk), does this enforce the rule about client

Re: Need Document and Explanation Of New Metrics Name in New Java Producer on Kafka Trunk

2014-09-15 Thread Bhavesh Mistry
allow . in the topic name. Topic name can be alpha-numeric plus - and _. Thanks, Jun On Tue, Sep 9, 2014 at 6:29 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Thanks, I was using without JMX. I will go through doc. But how about Topic or Metric name Topic Name

[Java New Producer Configuration] Maximum time spent in Queue in Async mode

2014-09-11 Thread Bhavesh Mistry
Hi Kafka team, How do I configure the maximum amount of time a message spends in the queue? In the old producer, there is a property called queue.buffering.max.ms, and it is not present in the new one. Basically, if I just send one message that is smaller than the batch size, what is the amount of time the message will be in the local
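
For reference, the closest equivalent in the new Java producer is linger.ms: the sender waits at most roughly linger.ms for a batch to fill before transmitting, so a lone message smaller than batch.size leaves the accumulator after about that long (plus request/retry time). A hedged config sketch with a placeholder broker address:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class LingerConfig {
        public static Properties producerProps() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384"); // bytes per per-partition batch
            props.put(ProducerConfig.LINGER_MS_CONFIG, "5");      // max wait to fill a batch before sending
            return props;
        }
    }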

Re: message size limit

2014-09-10 Thread Bhavesh Mistry
will be in its own batch. Then, only one message will be rejected by the broker. Thanks, Jun On Tue, Sep 9, 2014 at 5:51 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Jun, Is there any plug-ability that Developer can customize batching logic or inject custom code

Re: message size limit

2014-09-09 Thread Bhavesh Mistry
, message.max.bytes applies to the compressed size of a batch of messages. Otherwise, message.max.bytes applies to the size of each individual message. Thanks, Jun On Wed, Sep 3, 2014 at 3:25 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: I am referring to wiki http

Need Document and Explanation Of New Metrics Name in New Java Producer on Kafka Trunk

2014-09-09 Thread Bhavesh Mistry
Kafka Team, Can you please let me know what each of the following metrics means? Some of them are obvious, but some are hard to understand. My topic name is *TOPIC_NAME*. Can we enforce a Topic Name Convention or Metric Name Convention? Because in the previous version of Kafka, we have similar

Re: message size limit

2014-09-09 Thread Bhavesh Mistry
On Tue, Sep 9, 2014 at 12:59 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: HI Jun, Thanks for clarification. Follow up questions, does new producer solve the issues highlight. In event of compression and async mode in new producer, will it break down messages

Re: Need Document and Explanation Of New Metrics Name in New Java Producer on Kafka Trunk

2014-09-09 Thread Bhavesh Mistry
...@gmail.com wrote: Hi Bhavesh, Each of those JMX attributes comes with documentation. If you open up jconsole and attach to a jvm running the consumer you should be able to read the descriptions for each attribute. -Jay On Tue, Sep 9, 2014 at 2:07 PM, Bhavesh Mistry mistry.p.bhav

Re: message size limit

2014-09-03 Thread Bhavesh Mistry
Hi Jun, We have a similar problem. We have variable-length messages, so with a fixed batch size the batch sometimes exceeds the limit set on the brokers (2MB). Can the Producer have some extra logic to determine the optimal batch size by looking at the configured message.max.bytes value?

Re: message size limit

2014-09-03 Thread Bhavesh Mistry
message not per batch. is there another limit I should be aware of? thanks On Wed, Sep 3, 2014 at 2:07 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Jun, We have similar problem. We have variable length of messages. So when we have fixed size of Batch sometime

Re: High Level Consumer and Commit

2014-09-03 Thread Bhavesh Mistry
On Tuesday, September 2, 2014 5:43 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Philip, Yes, We have disabled auto commit but, we need to be able to read from particular offset if we manage the offset ourself in some storage(DB). High Level consumer does not allow per

High Level Consumer and Commit

2014-09-02 Thread Bhavesh Mistry
Hi Kafka Group, I have to pull data from the topic and index it into Elasticsearch with the Bulk API, and I want to commit only the batch that has been indexed, while continuing to read further from the same topic. I have auto commit off. List<Message> batch... while
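
A minimal sketch of the pattern being described, assuming the 0.8 high-level consumer with auto commit disabled; ZooKeeper address, group id, topic, batch size, and the Elasticsearch bulk call are placeholders:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class BulkIndexer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181"); // placeholder
            props.put("group.id", "es-indexer");        // placeholder
            props.put("auto.commit.enable", "false");   // commit only after a successful bulk index

            ConsumerConnector connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            KafkaStream<byte[], byte[]> stream =
                    connector.createMessageStreams(Collections.singletonMap("my-topic", 1))
                             .get("my-topic").get(0);
            ConsumerIterator<byte[], byte[]> it = stream.iterator();

            List<byte[]> batch = new ArrayList<>();
            while (it.hasNext()) {
                batch.add(it.next().message());
                if (batch.size() >= 500) {
                    bulkIndex(batch);            // placeholder for the Elasticsearch Bulk API call
                    connector.commitOffsets();   // commits everything consumed so far (not per partition)
                    batch.clear();
                }
            }
        }

        private static void bulkIndex(List<byte[]> docs) { /* placeholder */ }
    }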

Re: High Level Consumer and Commit

2014-09-02 Thread Bhavesh Mistry
://www.philipotoole.com On Tuesday, September 2, 2014 4:38 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Group, I have to pull the data from the Topic and index into Elastic Search with Bulk API and wanted to commit only batch that has been committed and still continue

Re: High Level Consumer and Commit

2014-09-02 Thread Bhavesh Mistry
what you want if you disable auto-commit. I'm not sure what else you're asking. Philip - http://www.philipotoole.com On Tuesday, September 2, 2014 5:15 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Philip, Thanks for the update

Re: [New Feature Request] Ability to Inject Queue Implementation Async Mode

2014-08-11 Thread Bhavesh Mistry
://www.philipotoole.com On Aug 7, 2014, at 2:44 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Basically, requirement is to support message dropping policy in event when queue is full. When you get storm of data (in our case logging due to buggy application code), we would like to retain current

[Kafka MirrorMaker] Message with Custom Partition Logic

2014-08-11 Thread Bhavesh Mistry
Hi Kafka Dev Team, We have to aggregate events (count) per DC and across DCs for one of our topics. We have the standard LinkedIn data pipeline: producers -> Local Brokers -> MM -> Central Brokers. So I would like to know how MM handles messages when custom partitioning logic is used as below and

Re: Uniform Distribution of Messages for Topic Across Partitions Without Effecting Performance

2014-08-07 Thread Bhavesh Mistry
at 9:12 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: How to achieve uniform distribution of non-keyed messages per topic across all partitions? We have tried to do this uniform distribution across partitions using custom partitioning from each producer instance using round robin
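
A minimal sketch of the round-robin partitioner being discussed, assuming the 0.8 Scala producer's pluggable Partitioner interface (that client requires the VerifiableProperties constructor and the class is wired in via the partitioner.class property):

    import java.util.concurrent.atomic.AtomicInteger;
    import kafka.producer.Partitioner;
    import kafka.utils.VerifiableProperties;

    public class RoundRobinPartitioner implements Partitioner {
        private final AtomicInteger counter = new AtomicInteger(0);

        public RoundRobinPartitioner(VerifiableProperties props) {
            // no configuration needed for plain round robin
        }

        @Override
        public int partition(Object key, int numPartitions) {
            // ignore the key: spread non-keyed messages evenly across partitions
            return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
        }
    }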

Re: [New Feature Request] Ability to Inject Queue Implementation Async Mode

2014-08-07 Thread Bhavesh Mistry
/ On Mon, Aug 4, 2014 at 8:52 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Kafka Version: 0.8.x 1) Ability to define which messages get dropped (least recent instead of most recent in the queue) 2) Try an unbounded queue to find out the upper limit

Re: Monitoring Producers at Large Scale

2014-07-08 Thread Bhavesh Mistry
://sematext.com/ On Thu, Jun 26, 2014 at 3:09 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi All, Thanks for all your responses. JMX metrics are there and we do pull the metrics, but I would like to capture the logs from Kafka lib as well especially WARN, FATAL and ERROR

Re: Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)

2014-07-03 Thread Bhavesh Mistry
under core? Guozhang On Thu, Jul 3, 2014 at 10:52 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Team, We are running multiple webapps in tomcat container, and we have producer which are managed by the ServletContextListener (Lifecycle). Upon contextInitialized we

Re: Monitoring Producers at Large Scale

2014-06-26 Thread Bhavesh Mistry
seems you will want to monitor Exceptions, e.g. Leader Not Found, Queue is full, resend fail etc. on the Kafka cluster 2014-06-25 8:20 GMT+08:00 Bhavesh Mistry mistry.p.bhav...@gmail.com: We use Kafka as a transport layer for application logs. How do we monitor Producers

Monitoring Producers at Large Scale

2014-06-24 Thread Bhavesh Mistry
We use Kafka as a transport layer for application logs. How do we monitor Producers at large scale: about 6000 boxes x 4 topics per box, so roughly 24000 producers (spread across multiple data centers; we have brokers per DC)? We do the monitoring based on logs. I have tried intercepting

Re: Kafka High Level Consumer Fail Over

2014-06-13 Thread Bhavesh Mistry
PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Dev Team/ Users, We have high level consumer group consuming from 32 partitions for a topic. We have been running 48 consumers in this group across multiple servers. We have kept 16 as back-up consumers, and hoping

Kafka High Level Consumer Fail Over

2014-06-12 Thread Bhavesh Mistry
Hi Kafka Dev Team/ Users, We have high level consumer group consuming from 32 partitions for a topic. We have been running 48 consumers in this group across multiple servers. We have kept 16 as back-up consumers, and hoping when the consumer dies, meaning when Zookeeper does not have an

Re: Producer Side Metric Details

2014-05-29 Thread Bhavesh Mistry
:41 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka Group, I need to get following metrics from the producer side. I am able to get following metric using the ProducerTopicMetrics class per minute. messageRate byteRate droppedMessageRate I would like to know

Exception kafka.common.NotLeaderForPartitionException

2014-05-29 Thread Bhavesh Mistry
Hi Kafka User Group, We have recently changed the “message.max.bytes” to 2MB and rebooted all the brokers at once. After this, we are getting “kafka.common.NotLeaderForPartitionException” on both Producer side and consumer side. How do I fix this, I have restarted producer and consumer

Re: Exception kafka.common.NotLeaderForPartitionException

2014-05-29 Thread Bhavesh Mistry
you still see them after restarting the brokers for a while? Guozhang On Thu, May 29, 2014 at 6:02 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com wrote: Hi Kafka User Group, We have recently changed the “message.max.bytes” to 2MB and rebooted all the brokers at once. After

Producer Side Metric Details

2014-05-28 Thread Bhavesh Mistry
Hi Kafka Group, I need to get following metrics from the producer side. I am able to get following metric using the ProducerTopicMetrics class per minute. messageRate byteRate droppedMessageRate I would like to know how to get above metric per topic per partition. Also, how do I get count of

Topic Partitioning Strategy For Large Data

2014-05-23 Thread Bhavesh Mistry
Hi Kafka Users, We are trying to transport 4TB of data per day on a single topic. It is operational application logs. How do we estimate the number of partitions and the partitioning strategy? Our goal is to drain the Kafka Brokers (from the consumer side) as soon as messages arrive (keep the lag as
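
A rough, hedged sizing calculation (the per-partition throughput figure below is an assumption to illustrate the method, not a measured number): 4 TB/day is about 4,000,000 MB / 86,400 s, or roughly 46 MB/s average produce rate. If one partition can comfortably sustain on the order of 10 MB/s on the target hardware, roughly 5 partitions cover the average; allowing 3-4x headroom for peaks, and for consumer parallelism (a partition is the unit of consumer-side draining), something in the 16-32 partition range is a common starting point, to be adjusted from measured consumer lag.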

Max Message Size

2014-05-14 Thread Bhavesh Mistry
Hi Kafka Team, Is there any message size limitation on the producer side? If there is, what happens to the message: does it get truncated or is the message lost? Thanks, Bhavesh

Re: QOS on Producer Side

2014-05-06 Thread Bhavesh Mistry
, May 5, 2014 at 10:03 PM, Bhavesh Mistry mistry.p.bhav...@gmail.comwrote: Thanks for answers. Does the callback get call on failure only or for success as well ? Also, how do I do this on Kafka 0.8.0 ? Is there any plan for adding buffering on disk for next version ? Also, when
