Re: Topics being automatically deleted?

2016-09-14 Thread Manikumar Reddy
Looks like you have not changed the default data log directory. By default Kafka is configured to store the data logs in the /tmp/ folder, and /tmp gets cleared on system reboots. Change the log.dirs config property to some other directory. On Thu, Sep 15, 2016 at 11:46 AM, Ali Akhtar wrote: > I've noticed
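For reference, a minimal sketch of the relevant server.properties change (the path is illustrative; pick any directory that survives reboots):

    # server.properties
    log.dirs=/var/lib/kafka/data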

Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-06 Thread Manikumar Reddy
congrats, Jason! On Wed, Sep 7, 2016 at 9:28 AM, Ashish Singh wrote: > Congrats, Jason! > > On Tuesday, September 6, 2016, Jason Gustafson wrote: > > > Thanks all! > > > > On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin > > wrote: > > > > > Congrats, Jason! > > > > > > On Tue, Sep 6, 2016 at 5:09

Re: kafka-mirror-maker.sh ssl

2016-08-25 Thread Manikumar Reddy
s= XX:9092 > > > (I have tested producer and consumer over ssl and its working) > > > -Original Message- > From: Manikumar Reddy [mailto:manikumar.re...@gmail.com] > Sent: Thursday, August 25, 2016 3:45 PM > To: users@kafka.apache.org > Subject: Re: kafka-mirror-maker

Re: kafka-mirror-maker.sh ssl

2016-08-25 Thread Manikumar Reddy
Security is supported in the new Consumer API. Use the "--new.consumer" option to enable the new consumer inside MirrorMaker. On Thu, Aug 25, 2016 at 6:08 PM, Erik Parienty wrote: > As I understand mirror-maker support consumer ssl > I tried to set it but I get WARN Property security.protocol is not va
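A hedged sketch of the invocation, assuming a 0.10-era MirrorMaker; the file names are placeholders:

    ./bin/kafka-mirror-maker.sh --new.consumer \
        --consumer.config ssl-consumer.properties \
        --producer.config target-producer.properties \
        --whitelist "my-topic"
    # ssl-consumer.properties would carry the SSL settings, e.g.:
    # security.protocol=SSL
    # ssl.truststore.location=/path/to/truststore.jks
    # ssl.truststore.password=changeit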

Re: Understand producer metrics

2016-08-18 Thread Manikumar Reddy
This doc link may help: http://kafka.apache.org/documentation.html#new_producer_monitoring On Fri, Aug 19, 2016 at 2:36 AM, David Yu wrote: > Kafka users, > > I want to resurface this post since it becomes crucial for our team to > understand our recent Samza throughput issues we are facing. >

Re: [kafka-clients] [VOTE] 0.10.0.1 RC2

2016-08-05 Thread Manikumar Reddy
ka-0.10.0.1-rc2/RELEASE_NOTES.html >> >> When compared to RC1, RC2 contains a fix for a regression where an older >> version of slf4j-log4j12 was also being included in the libs folder of the >> binary tarball (KAFKA-4008). Thanks to Manikumar Reddy for reporting the >&g

Re: kakfa-console-consumer multiple topics

2016-08-04 Thread Manikumar Reddy
You can pass a pattern string using whitelist config option. ex: sh kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --whitelist ".*" On Thu, Aug 4, 2016 at 7:14 PM, Tauzell, Dave wrote: > Is there a way to have the kafka-console-consumer read from multiple > topics? I

Re: Unable to write, leader not available

2016-08-03 Thread Manikumar Reddy
Hi, Can you enable authorizer debug logs and check for logs related to denied operations? We should also enable operations on the Cluster resource. Thanks, Manikumar On Thu, Aug 4, 2016 at 1:51 AM, Bryan Baugher wrote: > Hi everyone, > > I was trying out kerberos on Kafka 0.10.0.0 by creating

Re: [kafka-clients] [VOTE] 0.10.0.1 RC1

2016-08-03 Thread Manikumar Reddy
Hi, There are two versions of slf4j-log4j jar in the build. (1.6.1, 1.7.21). slf4j-log4j12-1.6.1.jar is coming from streams:examples module. Thanks, Manikumar On Tue, Aug 2, 2016 at 8:31 PM, Ismael Juma wrote: > Hello Kafka users, developers and client-developers, > > This is the second candid

Re: Topic not getting deleted on 0.8.2.1

2016-07-28 Thread Manikumar Reddy
Many delete-topic related issues got fixed in the latest versions. It is highly recommended to move to the latest version. https://issues.apache.org/jira/browse/KAFKA-1757 fixes a similar issue on the Windows platform. On Thu, Jul 28, 2016 at 3:40 PM, Ghosh, Prabal Kumar < prabal.kumar.gh...@sap.com> w

Re: Synchronized block in StreamTask

2016-07-28 Thread Manikumar Reddy
You already got reply from Guozhang on dev mailing list. On Thu, Jul 28, 2016 at 7:09 AM, Pierre Coquentin < pierre.coquen...@gmail.com> wrote: > Hi, > > I've a simple technical question about kafka streams. > In class org.apache.kafka.streams.processor.internals.StreamTask, the > method "process

Re: Log retention not working

2016-07-27 Thread Manikumar Reddy
Also check if any value is set for the log.retention.bytes broker config. On Wed, Jul 27, 2016 at 8:03 PM, Samuel Taylor wrote: > Is it possible that your log directory is in /tmp/ and your OS is deleting > that directory? I know it's happened to me before. > > - Samuel > > On Jul 27, 2016 13:43, "David

Re: Consumer Offsets and Open FDs

2016-07-19 Thread Manikumar Reddy
rade b) backport the patch yourself. b) seems extremely risky to > me > > Thanks > > Tom > > On Tue, Jul 19, 2016 at 5:49 AM, Manikumar Reddy < > manikumar.re...@gmail.com> > wrote: > > > Try increasing log cleaner threads. > > > > On Tue, Jul

Re: Consumer Offsets and Open FDs

2016-07-18 Thread Manikumar Reddy
aner.scala:322) > >at > kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:230) > >at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:208) > >at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63) > >[2016-06-24

Re: Enabling PLAINTEXT inter broker security

2016-07-15 Thread Manikumar Reddy
Hi, Which Kafka version are you using? SASL/PLAIN support is available from the Kafka 0.10.0.0 release onwards. Thanks Manikumar On Fri, Jul 15, 2016 at 4:22 PM, cs user wrote: > Apologies, just to me clear, my broker settings are actually as below, > using PLAINTEXT throughout > > listeners=

Re: Consumer Offsets and Open FDs

2016-07-13 Thread Manikumar Reddy
aner) > > Using Visual VM, I do not see any log-cleaner threads in those brokers. I > do see it in the brokers not showing this behavior though. > > Any idea why the LogCleaner failed? > > As a temporary fix, should we restart the affected brokers? > > Thanks again! > &

Re: Consumer Offsets and Open FDs

2016-07-13 Thread Manikumar Reddy
Hi, Are you seeing any errors in log-cleaner.log? The log-cleaner thread can crash on certain errors. Thanks Manikumar On Wed, Jul 13, 2016 at 9:54 PM, Lawrence Weikum wrote: > Hello, > > We’re seeing a strange behavior in Kafka 0.9.0.1 which occurs about every > other week. I’m curious if o

Fwd: consumer.subscribe(Pattern p , ..) method fails with Authorizer

2016-07-08 Thread Manikumar Reddy
Hi, the consumer.subscribe(Pattern p, ..) method implementation tries to get metadata for all the topics. This will throw TopicAuthorizationException on internal topics and other unauthorized topics. We may need to move the pattern matching to the server side. Is this a known issue? If not, I will raise JIR

Re: Log retention just for offset topic

2016-06-29 Thread Manikumar Reddy
Hi, Kafka internally creates the offsets topic (__consumer_offsets) with compact mode on. From 0.9.0.1 onwards log.cleaner.enable=true by default. This means topics with cleanup.policy=compact will now be compacted by default. You can tweak the offset topic configuration by using below props
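The message is cut off before the property list; purely as an illustration (not necessarily the exact properties the author listed), broker settings that commonly control the internal offsets topic include:

    # server.properties -- illustrative values (shown at their usual defaults)
    offsets.topic.num.partitions=50
    offsets.topic.replication.factor=3
    offsets.retention.minutes=1440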

Re: [DISCUSS] Java 8 as a minimum requirement

2016-06-17 Thread Manikumar Reddy
I agree with Harsha and Marcus. Many of the Kafka users are still on Java 7, and it may take time for them to upgrade to newer Java versions. We may need to support Java 7 for a while. We can remove the support from the next major version onwards. Thanks, Manikumar On Fri, Jun 17, 2016 at 2:04 PM, Marcus Gründler wr

Re: Automatic Broker Id Generation

2016-05-20 Thread Manikumar Reddy
ght? > > Thanks > > On Thu, May 19, 2016 at 7:14 PM, Manikumar Reddy < > manikumar.re...@gmail.com> > wrote: > > > Auto broker id generation logic: > > 1. If there is a user provided broker.id, then it is used and id range > is > > from 0 to reser

Re: [COMMERCIAL] Re: [COMMERCIAL] Re: download - 0.10.0.0 RC6

2016-05-19 Thread Manikumar Reddy
Hi, commitId is nothing but the latest git commit hash of the release. This is taken while building the binary distribution. commitId is available in the binary release (kafka_2.10-0.10.0.0.tgz); commitId will not be available if you build from the source release (kafka-0.10.0.0-src.tgz). On Wed, May 18, 2016 at

Re: Automatic Broker Id Generation

2016-05-19 Thread Manikumar Reddy
Auto broker id generation logic: 1. If there is a user provided broker.id, then it is used and id range is from 0 to reserved.broker.max.id 2. If there is no user provided broker.id, then auto id generation starts from reserved.broker.max.id +1 3. broker.id is stored in meta.properties file under e
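A hedged sketch of the related broker settings (values shown are the usual defaults, for illustration only):

    # server.properties
    broker.id=5                  # user-assigned ids stay in the 0..reserved.broker.max.id range
    reserved.broker.max.id=1000  # default; auto-generated ids start above this value
    # omit broker.id (or leave it at -1) to let the broker auto-generate an id,
    # which is then persisted in meta.properties under each log.dirs directory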

Re: client.id, v9 consumer, metrics, JMX and quotas

2016-05-11 Thread Manikumar Reddy
Hi, This is a known issue. Check the links below for the related discussion https://issues.apache.org/jira/browse/KAFKA-3494 https://qnalist.com/questions/6420696/discuss-mbeans-overwritten-with-identical-clients-on-a-single-jvm Manikumar On Wed, May 11, 2016 at 7:29 PM, Paul Mackles wrote: > Hi > > >

Re: How to work around log compaction error (0.8.2.2)

2016-04-27 Thread Manikumar Reddy
Hi, Are you enabling log compaction on a topic with compressed messages? If yes, then that might be the reason for the exception. 0.8.2.2 Log Compaction does not support compressed messages. This got fixed in 0.9.0.0 (KAFKA-1641, KAFKA-1374) Check below mail thread for some corrective action

Re: Best Guide/link for Kafka Ops work

2016-04-21 Thread Manikumar Reddy
This book can help you: Kafka: The Definitive Guide ( http://shop.oreilly.com/product/0636920044123.do) On Thu, Apr 21, 2016 at 9:38 PM, Mudit Agarwal wrote: > Hi, > Any recommendations for any online guide/link on managing/Administration > of kafka cluster. > Thanks,Mudit

Re: kafk2.8.0-0.8.1.1 too many close_wait

2016-04-21 Thread Manikumar Reddy
We have fixed similar issues in the 0.8.2.0 release. You should consider moving to the latest releases. On Thu, Apr 21, 2016 at 1:11 PM, wanghai wrote: > > > > Hello > > When > kafka cluster runs a period of time, I find the cluster stunk. Consumers > can’t > read message from cluster. > >

Re: Compaction does not seem to kick in

2016-04-21 Thread Manikumar Reddy
Did you set broker config property log.cleanup.policy=compact or topic level property cleanup.policy=compact ? On Thu, Apr 21, 2016 at 7:16 PM, Kasim Doctor wrote: > Hi everyone, > > I have a cluster of 5 brokers with Kafka 2.10_0.8.2.1 and one of the > topics compacted (out of a total of 4 topi
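A hedged illustration of the two ways to turn compaction on (topic name is a placeholder; on 0.8.2, log.cleaner.enable=true is also needed for the cleaner thread to run):

    # broker-wide default (server.properties)
    log.cleaner.enable=true
    log.cleanup.policy=compact
    # or per topic:
    ./bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
        --topic my-topic --config cleanup.policy=compact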

Re: Metrics for Log Compaction

2016-04-15 Thread Manikumar Reddy
Hi, log compaction related JMX metric object names are given below. kafka.log:type=LogCleaner,name=cleaner-recopy-percent kafka.log:type=LogCleaner,name=max-buffer-utilization-percent kafka.log:type=LogCleaner,name=max-clean-time-secs kafka.log:type=LogCleanerManager,name=max-dirty-percent Afte

Re: Metrics for Log Compaction

2016-04-15 Thread Manikumar Reddy
Hi, kafka.log:type=LogCleaner,name=cleaner-recopy-percent kafka.log:type=LogCleanerManager,name=max-dirty-percent kafka.log:type=LogCleaner,name=max-clean-time-secs After every compaction cycle, we also print some useful statistics to logs/log-cleaner.log file. On Wed, Apr 13, 2016 at 7:16

Re: Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Manikumar Reddy
t; Oleg > > On Apr 12, 2016, at 9:22 AM, Manikumar Reddy > wrote: > > > > New consumer config property "max.poll.records" is getting introduced > in > > upcoming 0.10 release. > > This property can be used to control the no. of records in each

Re: Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Manikumar Reddy
New consumer config property "max.poll.records" is getting introduced in upcoming 0.10 release. This property can be used to control the no. of records in each poll. Manikumar On Tue, Apr 12, 2016 at 6:26 PM, Oleg Zhurakousky < ozhurakou...@hortonworks.com> wrote: > Is there a way to specify
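A minimal sketch of how the property would be set on a 0.10 consumer (the value and group name are illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "my-group");                 // illustrative
    props.put("max.poll.records", "100");              // cap records returned by each poll()
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);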

Re: KafkaProducer Retries in .9.0.1

2016-04-05 Thread Manikumar Reddy
Hi, Producer message size validation checks ("buffer.memory", "max.request.size") happen before batching and sending messages. The retry mechanism is applicable for broker-side errors and network errors. Try changing the "message.max.bytes" broker config property to simulate a broker-side error.

Re: consumer too fast

2016-03-31 Thread Manikumar Reddy
Hi, 1. New config property "max.poll.records" is getting introduced in upcoming 0.10 release. This property can be used to control the no. of records in each poll. 2. We can use the combination of ExecutorService/Processing Thread and Pause/Resume API to handle unwanted rebalances. Some of
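A rough Java sketch of idea (2), assuming the 0.10 consumer API where pause()/resume() take a collection; broker address, topic, group and the handler are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PausingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // adjust to your cluster
            props.put("group.id", "slow-processing-group");     // illustrative
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic"));
            ExecutorService worker = Executors.newSingleThreadExecutor();

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (records.isEmpty()) continue;

                // hand the batch to a processing thread
                Future<?> done = worker.submit(() -> {
                    for (ConsumerRecord<String, String> r : records) {
                        process(r);                              // your slow per-record work
                    }
                });

                // pause fetching so poll() only keeps the group membership alive
                consumer.pause(consumer.assignment());
                while (!done.isDone()) {
                    consumer.poll(100);                          // returns no records while paused
                }
                consumer.resume(consumer.assignment());
            }
        }

        private static void process(ConsumerRecord<String, String> r) {
            // placeholder
        }
    }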

Re: Is it safe to send messages to Kafka when one of the brokers is down?

2016-03-28 Thread Manikumar Reddy
Hi, 1. Your topic partitions are not replicated (replication factor =1). Increase replication factor for better fault tolerance. With proper replication, Kafka Brokers/Producers can handle node failures without data loss. 2. Looks like Kafka brokers are not in a cluster. They might be

Re: Queue implementation

2016-03-28 Thread Manikumar Reddy
Yes. your scenarios are easy to implement using Kafka. Pl go through Kafka documentation and examples for better understanding of Kafka concepts, use cases and design. https://kafka.apache.org/documentation.html https://github.com/apache/kafka/tree/trunk/examples On Tue, Mar 29, 2016 at 9:20 AM,

Re: Custom serializer/deserializer for kafka 0.9.x version

2016-03-28 Thread Manikumar Reddy
Hi, You need to implement org.apache.kafka.common.serialization.Serializer, org.apache.kafka.common.serialization.Deserializer interfaces. Encoder, Decoder interfaces are for older clients. Example code: https://github.com/omkreddy/kafka-examples/tree/master/consumer/src/main/java/kafka/exampl
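A minimal sketch of those interfaces, using a plain UTF-8 String payload purely for illustration (class names are made up; substitute your own type and wire format):

    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import org.apache.kafka.common.serialization.Deserializer;
    import org.apache.kafka.common.serialization.Serializer;

    public class MyStringSerializer implements Serializer<String> {
        @Override public void configure(Map<String, ?> configs, boolean isKey) { }
        @Override public byte[] serialize(String topic, String data) {
            return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
        }
        @Override public void close() { }
    }

    class MyStringDeserializer implements Deserializer<String> {
        @Override public void configure(Map<String, ?> configs, boolean isKey) { }
        @Override public String deserialize(String topic, byte[] data) {
            return data == null ? null : new String(data, StandardCharsets.UTF_8);
        }
        @Override public void close() { }
    }

The classes are then referenced from the client's key.serializer / value.serializer (or the corresponding deserializer) properties.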

Re: Multiple Topics and Consumer Groups

2016-03-27 Thread Manikumar Reddy
A consumer can belong to only one consumer group. https://kafka.apache.org/documentation.html#intro_consumers On Mon, Mar 28, 2016 at 11:01 AM, Vinod Kakad wrote: > Hi, > > I wanted to know if same consumer can be in two consumer groups. > > OR > > How the multiple topic subscription for consume

Re: Offset after message deletion

2016-03-27 Thread Manikumar Reddy
It will continue from the latest offset. The offset is an increasing, contiguous sequence number per partition. On Mon, Mar 28, 2016 at 9:11 AM, Imre Nagi wrote: > Hi All, > > I'm new in kafka. So, I have a question related to kafka offset. > > From the kafka documentation in here >

Re: Re: Topics in Kafka

2016-03-23 Thread Manikumar Reddy
ark Streaming afterwards? > > Thank you in advance. > > Regards, > Daniela > > > > Sent: Wednesday, 23 March 2016 at 09:42 > From: "Manikumar Reddy" > To: "users@kafka.apache.org" > Subject: Re: Topics in Kafka > Hi, > > 1. Based

Re: Topics in Kafka

2016-03-23 Thread Manikumar Reddy
Hi, 1. Based on your design, it can be one or more topics. You can design one topic per region or one topic for all region devices. 2. Yes, you need to listen to web socket messages and write to kafka server using kafka producer. In your use case, you can also send messages using Kafka Re

Re: Reading data from sensors

2016-03-23 Thread Manikumar Reddy
Hi, you can use librdkafka C library for producing data. https://github.com/edenhill/librdkafka Manikumar On Wed, Mar 23, 2016 at 12:41 PM, Shashidhar Rao wrote: > Hi, > > Can someone help me with reading data from sensors and storing into Kafka. > > At the moment the sensors data are read b

Re: Reg : Unable to produce message

2016-03-19 Thread Manikumar Reddy
We may get a few warning exceptions on the first produce to an unknown topic, with the default server config property auto.create.topics.enable=true. If this is the case, then it is a harmless exception. On Sun, Mar 20, 2016 at 11:19 AM, Mohamed Ashiq wrote: > All, > > I am getting this error for few topic

Re: Larger Size Error Message

2016-03-19 Thread Manikumar Reddy
The DumpLogSegments tool is used to dump partition data logs (not application logs). Usage: ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/TEST-TOPIC-0/.log Use the --key-decoder-class, --value-decoder-class options to pass deserializers. On Fri, Mar 18,

Re: Larger Size Error Message

2016-03-19 Thread Manikumar Reddy
18, 2016 at 12:31 PM, Manikumar Reddy wrote: > DumpLogSegments tool is used to dump partition data logs (not application > logs). > > Usage: > ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files > /tmp/kafka-logs/TEST-TOPIC-0/.log > > Use --k

Re: Kafka 0.8.1.1 keeps full GC

2016-03-13 Thread Manikumar Reddy
Hi, These logs are minor GC logs and they look normal. Look for the word 'Full' for full gc log details. On Sun, Mar 13, 2016 at 3:06 PM, li jinyu wrote: > I'm using Kafka 0.8.1.1, have 10 nodes in a cluster, all are started with > default command: > ./bin/kafka-server-start.sh conf/server

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Manikumar Reddy
We need to pass "--new-consumer" property to kafka-consumer-groups.sh command to use new consumer. sh kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list --new-consumer On Thu, Mar 10, 2016 at 12:02 PM, Rajiv Kurian wrote: > Hi Guozhang, > > I tried using the kafka-consumer-gro

Re: Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread Manikumar Reddy
Kafka scripts use the "kafka-run-class.sh" script to set environment variables and run classes. So if you set any environment variable in "kafka-run-class.sh", then it will be applicable to all the scripts. So try to set a different JMX_PORT for kafka-topics.sh. On Mon, Feb 8, 2016 at 9:24 PM, Shi
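For example, a different JMX port can be passed just for that one invocation (the port number is arbitrary; pick one not already used by the broker):

    JMX_PORT=9998 ./bin/kafka-topics.sh --zookeeper localhost:2181 --list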

Re: Detecting broker version programmatically

2016-02-04 Thread Manikumar Reddy
@James It is broker-id for Kafka server and client-id for java producer/consumer apps @Dana Yes, we can infer using custom logic.

Re: Detecting broker version programmatically

2016-02-04 Thread Manikumar Reddy
Currently it is available through JMX Mbean. It is not available on wire protocol/requests. Pending JIRAs related to this: https://issues.apache.org/jira/browse/KAFKA-2061 On Fri, Feb 5, 2016 at 4:31 AM, wrote: > Is there a way to detect the broker version (even at a high level 0.8 vs > 0.9) us

Re: Producer code to a partition

2016-02-03 Thread Manikumar Reddy
Feb 4, 2016 at 7:17 AM, Manikumar Reddy > > wrote: > > > Hi, > > > > You can use ProducerRecord(java.lang.String topic, java.lang.Integer > > partition, K key, V value) constructor > > to pass partition number. > > > > > > > > > ht

Re: Producer code to a partition

2016-02-03 Thread Manikumar Reddy
Hi, You can use ProducerRecord(java.lang.String topic, java.lang.Integer partition, K key, V value) constructor to pass partition number. https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html Kumar On Thu, Feb 4, 2016 at 11:41 AM, Joe San wrote: > Kafk
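A minimal Java sketch of that constructor in use (broker address, topic name and partition number are illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PartitionedSend {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // send directly to partition 2 of "my-topic"
            producer.send(new ProducerRecord<>("my-topic", 2, "some-key", "some-value"));
            producer.close();
        }
    }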

Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Manikumar Reddy
+1 (non-binding). verified the artifacts, quick start. On Wed, Sep 9, 2015 at 2:41 AM, Ashish wrote: > +1 (non-binding) > > Ran the build, works fine. All test cases passed > > On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao wrote: > > This is the first candidate for release of Apache Kafka 0.8.2.2. Th

Re: Query - Compression

2015-08-24 Thread Manikumar Reddy
Hi, If you are using producer's inbuilt compression (by setting compression.type property), then the consumer will auto decompress the data for you. Kumar On Mon, Aug 24, 2015 at 12:19 PM, ram kumar wrote: > Hi, > > If i compress the data in producer as snappy, > while consuming should i
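As an illustration, the only setting needed is on the producer side (snappy shown as an example value):

    # producer config
    compression.type=snappy
    # consumers need no extra setting; messages are decompressed transparently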

Re: spark broadcast variable of Kafka producer throws ConcurrentModificationException

2015-08-18 Thread Manikumar Reddy
Hi, looks like the exception is occurring at kryo serialization. make sure you are not concurrently modifying java.util.Vector data structure. kumar On Wed, Aug 19, 2015 at 3:32 AM, Shenghua(Daniel) Wan wrote: > Hi, > Did anyone see java.util.ConcurrentModificationException when using > broa

Re: Zookeeper use cases with Kafka

2015-08-18 Thread Manikumar Reddy
Hi, 1. ZK is used for co-ordination between brokers, controller election, leader election, storing topic configuration etc. I think we use both sequential and ephemeral nodes. 2. Yes, Kafka uses ZK watches for controller changes, new topic creation, new partition creation, leader chan

Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-14 Thread Manikumar Reddy
+1 for 0.8.2.2 release On Fri, Aug 14, 2015 at 5:49 PM, Ismael Juma wrote: > I think this is a good idea as the change is minimal on our side and it has > been tested in production for some time by the reporter. > > Best, > Ismael > > On Fri, Aug 14, 2015 at 1:15 PM, Jun Rao wrote: > > > Hi, E

Re: logging for Kafka new producer

2015-08-10 Thread Manikumar Reddy
The new producer uses SLF4J for logging, so any logging framework such as log4j, java.util.logging or logback can be plugged in. On Tue, Aug 11, 2015 at 11:38 AM, Tao Feng wrote: > Hi, > > I am wondering what Kafka new producer uses for logging. Is it log4j? > > Thanks, > -Tao >
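For instance, assuming the slf4j-log4j12 binding is on the client's classpath, a minimal log4j.properties (levels are illustrative) could look like:

    log4j.rootLogger=INFO, stdout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%d %p %c - %m%n
    # turn up producer internals when debugging
    log4j.logger.org.apache.kafka.clients.producer=DEBUG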

Re: Partition and consumer configuration

2015-08-10 Thread Manikumar Reddy
Hi, > 1. Will Kafka distribute the 100 serialized files randomly say 20 files go > to Partition 1, 25 to Partition 2 etc or do I have an option to configure > how many files go to which partition . > Assuming you are using new producer, All keyed messages will be distributed based on

Re: kafka benchmark tests

2015-07-14 Thread Manikumar Reddy
Yes, A list of Kafka Server host/port pairs to use for establishing the initial connection to the Kafka cluster https://kafka.apache.org/documentation.html#newproducerconfigs On Tue, Jul 14, 2015 at 7:29 PM, Yuheng Du wrote: > Does anyone know what is bootstrap.servers= > esv4-hcl198.grid.link

Re: How to run Kafka in background

2015-06-24 Thread Manikumar Reddy
You can pass the "-daemon" option to the Kafka startup script. ./kafka-server-start.sh -daemon ../config/server.1.properties On Wed, Jun 24, 2015 at 4:14 PM, bit1...@163.com wrote: > Hi, > > I am using kafak 0.8.2.1 , and when I startup Kafka with the script: > ./kafka-server-start.sh ../config

Re: Issue with log4j Kafka Appender.

2015-06-18 Thread Manikumar Reddy
You can enable producer debug log and verify. In 0.8.2.0, you can set compressionType , requiredNumAcks, syncSend producer config properties to log4j.xml. Trunk build can take additional retries property . Manikumar On Thu, Jun 18, 2015 at 1:14 AM, Madhavi Sreerangam < madhavi.sreeran...@gmai

Re: How to specify kafka bootstrap jvm options?

2015-06-17 Thread Manikumar Reddy
Most of the tuning options are available in kafka-run-class.sh. You can override the relevant environment variables (KAFKA_HEAP_OPTS, KAFKA_JVM_PERFORMANCE_OPTS) when invoking the kafka-server-start.sh script. On Wed, Jun 17, 2015 at 2:11 PM, luo.fucong wrote: > I want to tune the kafka jvm options, but nowhere can I pass the op
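For example (heap size and GC flags are illustrative, not recommendations):

    KAFKA_HEAP_OPTS="-Xms4g -Xmx4g" \
    KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=20" \
    ./bin/kafka-server-start.sh config/server.properties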

Re: Log compaction not working as expected

2015-06-16 Thread Manikumar Reddy
; is the last segment as opposed to the segment that would be written to if > something were received right now. > > On Tue, Jun 16, 2015 at 8:38 AM, Manikumar Reddy > wrote: > > > Hi, > > > > Your observation is correct. we never compact the active segment.

Re: Log compaction not working as expected

2015-06-16 Thread Manikumar Reddy
Hi, Your observation is correct. we never compact the active segment. Some improvements are proposed here, https://issues.apache.org/jira/browse/KAFKA-1981 Manikumar On Tue, Jun 16, 2015 at 5:35 PM, Shayne S wrote: > Some further information, and is this a bug? I'm using 0.8.2.1. > >

Re: cannot make another partition reassignment due to the previous partition reassignment failure

2015-06-15 Thread Manikumar Reddy
Hi, Just delete the "/admin/reassign_partitions" zk node from ZooKeeper and try again. #sh zookeeper-shell.sh localhost:2181 delete /admin/reassign_partitions Manikumar On Tue, Jun 16, 2015 at 8:15 AM, Yu Yang wrote: > HI, > > We have a kafka 0.8.1.1 cluster. Recently I did a partition as

Re: Producer RecordMetaData with Offset -1

2015-06-12 Thread Manikumar Reddy
Hi, What is the value set for the acks config property? If acks=0, then the producer will not wait for any acknowledgment from the server, and the offset given back for each record will always be set to -1. Manikumar On Fri, Jun 12, 2015 at 7:17 PM, Gokulakannan M (Engineering - Data Platform) wrote: >
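A small illustration of the producer-side setting (the value shown is only an example):

    # producer config
    acks=1      # wait for the leader; acks=all gives the strongest guarantee
    # with acks=0 the producer gets no acknowledgment and RecordMetadata reports offset -1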

Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread Manikumar Reddy
May 2015 at 11:06, Manikumar Reddy wrote: > > > If both C1,C2 belongs to same consumer group, then the re-balance will be > > triggered. > > A consumer subscribes to event changes of the consumer id registry within > > its group. > > > > On Mon, May 11, 2

Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread Manikumar Reddy
If both C1,C2 belongs to same consumer group, then the re-balance will be triggered. A consumer subscribes to event changes of the consumer id registry within its group. On Mon, May 11, 2015 at 10:55 AM, dinesh kumar wrote: > Hi, > I am looking at the code of kafka.consumer.ZookeeperConsumerConn

Re: New producer: metadata update problem on 2 Node cluster.

2015-04-28 Thread Manikumar Reddy
Hi Ewen, Thanks for the response. I agree with you, In some case we should use bootstrap servers. > > If you have logs at debug level, are you seeing this message in between the > connection attempts: > > Give up sending metadata request since no node is available > Yes, this log came for co

Re: New producer: metadata update problem on 2 Node cluster.

2015-04-27 Thread Manikumar Reddy
Any comments on this issue? On Apr 24, 2015 8:05 PM, "Manikumar Reddy" wrote: > We are testing new producer on a 2 node cluster. > Under some node failure scenarios, producer is not able > to update metadata. > > Steps to reproduce > 1. form a 2 node cluster (K1,

Re: New Java Producer: Single Producer vs multiple Producers

2015-04-24 Thread Manikumar Reddy
the fastest because batching dramatically reduces the number of > requests (esp using the new java producer). > -Jay > > On Fri, Apr 24, 2015 at 4:54 AM, Manikumar Reddy < > manikumar.re...@gmail.com> > wrote: > > > We have a 2 node cluster with 100 topics. > >

New producer: metadata update problem on 2 Node cluster.

2015-04-24 Thread Manikumar Reddy
We are testing new producer on a 2 node cluster. Under some node failure scenarios, producer is not able to update metadata. Steps to reproduce 1. form a 2 node cluster (K1, K2) 2. create a topic with single partition, replication factor = 2 3. start producing data (producer metadata : K1,K2) 2. K

New Java Producer: Single Producer vs multiple Producers

2015-04-24 Thread Manikumar Reddy
We have a 2 node cluster with 100 topics. should we use a single producer for all topics or create multiple producers? What is the best choice w.r.t network load/failures, node failures, latency, locks? Regards, Manikumar

Re: Does Kafka 0.8.2 producer has a lower throughput in sync-mode, comparing with 0.8.1.x?

2015-03-09 Thread Manikumar Reddy
, Manikumar Reddy wrote: > 1 . > > > > On Mon, Mar 9, 2015 at 1:03 PM, Yu Yang wrote: > >> The confluent blog >> <http://blog.confluent.io/2014/12/02/whats-coming-in-apache-kafka-0-8-2/> >> mentions >> that the the batching is done whenever possible

Re: Does Kafka 0.8.2 producer has a lower throughput in sync-mode, comparing with 0.8.1.x?

2015-03-09 Thread Manikumar Reddy
1 . On Mon, Mar 9, 2015 at 1:03 PM, Yu Yang wrote: > The confluent blog > > mentions > that the the batching is done whenever possible now. "The sync producer, > under load, can get performance as good as the async produ

Re: Broker shuts down due to unrecoverable I/O error

2015-03-03 Thread Manikumar Reddy
Hi, We are running on RedHat Linux with SAN storage. This happened only once. Thanks, Manikumar. On Tue, Mar 3, 2015 at 10:02 PM, Jun Rao wrote: > Which OS is this on? Is this easily reproducible? > > Thanks, > > Jun > > On Sun, Mar 1, 2015 at 8:24 PM, Manikumar Reddy

Broker shuts down due to unrecoverable I/O error

2015-03-01 Thread Manikumar Reddy
Kafka 0.8.2 server got stopped after getting below I/O exception. Any thoughts on below exception? Can it be file system related? [2015-03-01 14:36:27,627] FATAL [KafkaApi-0] Halting due to unrecoverable I/O error while handling produce request: (kafka.serv er.KafkaApis) kafka.common.KafkaStorage

Re: How to measure performance metrics

2015-02-24 Thread Manikumar Reddy
Hi, There are a bunch of metrics available for performance monitoring. These metrics can be monitored with a JMX monitoring tool (JConsole). https://kafka.apache.org/documentation.html#monitoring. Some of the available metrics reporters are: https://cwiki.apache.org/confluence/display/KAFKA/JM

Re: Custom partitioner in kafka-0.8.2.0

2015-02-19 Thread Manikumar Reddy
Hi, In new producer, we can specify the partition number as part of ProducerRecord. >From javadocs : *"If a valid partition number is specified that partition will be used when sending the record. If no partition is specified but a key is present a partition will be chosen using a hash of the key

Re: KafkaConsumer Class Usage in Kafka 0.8.2 Beta

2015-02-12 Thread Manikumar Reddy
New KafkaConsumer is not yet released. It is planned for 0.9.0 release. On 2/13/15, Jayesh Thakrar wrote: > Hi, > I am trying to write a consumer using the KafkaConsumer class > from > https://github.com/apache/kafka/blob/0.8.2/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsum

Re: Does kafka use sbt (or) gradle

2015-02-12 Thread Manikumar Reddy
Kafka migrated to gradle. Pl follow the README instructions. On 2/12/15, madhavan kumar wrote: > dear all, > i am new to kafka. And when i try to set up kafka source code on my > lappie, github's readme points to gradle whereas kafka Quick start > Documentation talks about scala build tool sbt

Re: regarding custom msg

2015-02-09 Thread Manikumar Reddy
Can you post the exception stack-trace? On Mon, Feb 9, 2015 at 2:58 PM, Gaurav Agarwal wrote: > hello > We are sending custom message across producer and consumer. But > getting class cast exception . This is working fine with String > message and string encoder. > But this did not work with cus

Re: one message consumed by both consumers in the same group?

2015-02-08 Thread Manikumar Reddy
Hi, bin/kafka-console-consumer.sh --. > > all the parameters are the same > You need to set the same group.id to create a consumer group. By default the console consumer creates a random group.id. You can set group.id by using the "--consumer.config /tmp/consumer.props" flag. $$>echo "group.id=1" >

Re: Not found NewShinyProducer sync performance metrics

2015-02-08 Thread Manikumar Reddy
er. > > > > Otis > > -- > > Monitoring * Alerting * Anomaly Detection * Centralized Log Management > > Solr & Elasticsearch Support * http://sematext.com/ > > > > > > On Thu, Feb 5, 2015 at 5:58 AM, Manikumar Reddy > > wrote: > > >

Re: Not found NewShinyProducer sync performance metrics

2015-02-05 Thread Manikumar Reddy
New Producer uses Kafka's own metrics api. Currently metrics are reported using jmx. Any jmx monitoring tool (jconsole) can be used for monitoring. On Feb 5, 2015 3:56 PM, "Xinyi Su" wrote: > Hi, > I am using kafka-producer-perf-test.sh to study NewShinyProducer *sync* > performance. > > I have n

Re: Potential socket leak in kafka sync producer

2015-01-29 Thread Manikumar Reddy
Hope you are closing the producers. Can you share the attachment through gist/pastebin? On Fri, Jan 30, 2015 at 11:11 AM, ankit tyagi wrote: > Hi Jaikiran, > > I am using ubuntu and was able to reproduce on redhat too. Please find the > more information below. > > > *DISTRIB_ID=Ubuntu* > *DISTRIB_

Re: Missing Per-Topic BrokerTopicMetrics in v0.8.2.0

2015-01-27 Thread Manikumar Reddy
firm that the per topic metrics are not coming through to the > > yammer metrics registry. I do see them in jmx (via jconsole), but the > > MetricsRegistry does not have them. > > All the other metrics are coming through that appear in jmx. > > > > This is with sin

Re: Missing Per-Topic BrokerTopicMetrics in v0.8.2.0

2015-01-26 Thread Manikumar Reddy
If you are using a multi-node cluster, then metrics may be reported from other servers. Please check all the servers in the cluster. On Tue, Jan 27, 2015 at 4:12 AM, Kyle Banker wrote: > I've been using a custom KafkaMetricsReporter to report Kafka broker > metrics to Graphite. In v0.8.1.1, Kafka was

Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 2 (with the correct links)

2015-01-26 Thread Manikumar Reddy
+1 (Non-binding) Verified source package, unit tests, release build, topic deletion, compaction and random testing On Mon, Jan 26, 2015 at 6:14 AM, Neha Narkhede wrote: > +1 (binding) > Verified keys, quick start, unit tests. > > On Sat, Jan 24, 2015 at 4:26 PM, Joe Stein wrote: > > > That make

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
Ok, got it. Link is different from Release Candidate 1. On Wed, Jan 21, 2015 at 10:01 PM, Jun Rao wrote: > Is it? You just need to navigate into org, then apache, then kafka, etc. > > Thanks, > > Jun > > On Wed, Jan 21, 2015 at 8:28 AM, Manikumar Reddy > wrote: > &

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
Also Maven artifacts link is not correct On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao wrote: > Yes, will send out a new email with the correct links. > > Thanks, > > Jun > > On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy > wrote: > >> All links are pointin

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
All links are pointing to https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/. They should be https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/ right? On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao wrote: > This is the second candidate for release of Apache Kafka 0.8.2.0. There > ha

Re: dumping JMX data

2015-01-17 Thread Manikumar Reddy
JIRAs related to the issue are https://issues.apache.org/jira/browse/KAFKA-1680 https://issues.apache.org/jira/browse/KAFKA-1679 On Sun, Jan 18, 2015 at 3:12 AM, Scott Chapman wrote: > While I appreciate all the suggestions on other JMX related tools, my > question is really about the JMXTool i

Re: Consumer questions

2015-01-17 Thread Manikumar Reddy
eplay > of the stream. The example is: > >KafkaStream.iterator(); > > which starts at wherever zookeeper recorded as where you left off. > > With the high level interface, can you request an iterator that starts at > the very beginning? > > > > On Fri,

Re: Question on running Kafka Producer in Java environment

2015-01-16 Thread Manikumar Reddy
Pl check your classpath. Some jars might be missing. On Sat, Jan 17, 2015 at 7:41 AM, Su She wrote: > Hello Everyone, > > Thank you for the time and help. I had the Kafka Producer running, but am > having some trouble now. > > 1) Using Maven, I wrote a Kafka Producer similar to the one found her

Re: Consumer questions

2015-01-16 Thread Manikumar Reddy
Hi, 1. In SimpleConsumer, you must keep track of the offsets in your application. In the example code, the "readOffset" variable can be saved in redis/zookeeper. You should plug this logic into your code. The High Level Consumer stores the last read offset information in ZooKeeper. 2. You wil

Re: [VOTE] 0.8.2.0 Candidate 1

2015-01-15 Thread Manikumar Reddy
Also can we remove "delete.topic.enable" config property and enable topic deletion by default? On Jan 15, 2015 10:07 PM, "Jun Rao" wrote: > Thanks for reporting this. I will remove that option in RC2. > > Jun > > On Thu, Jan 15, 2015 at 5:21 AM, Jaikiran Pai > wrote: > > > I just downloaded the

Re: Configuring location for server (log4j) logs

2015-01-14 Thread Manikumar Reddy
You just need to set the LOG_DIR property. All logs will be redirected to the LOG_DIR directory. On Thu, Jan 15, 2015 at 11:49 AM, Shannon Lloyd wrote: > By default Kafka writes its server logs into a "logs" directory underneath > the installation root. I'm trying to override this to get it to write lo
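For example (the target directory is illustrative):

    LOG_DIR=/var/log/kafka ./bin/kafka-server-start.sh config/server.properties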

Re: Delete topic

2015-01-14 Thread Manikumar Reddy
I think now we should delete this config property and allow topic deletion in 0.8.2 Yep, you need to set delete.topic.enable=true. Forgot that step :) 2015-01-14 10:16 GMT-08:00 Jayesh Thakrar : > Does one also need to set the config parameter "delete.topic.enable" to true ?I am using 8.2 beta a
