[jira] [Created] (KAFKA-7046) Support new Admin API for single topic
darion yaphet created KAFKA-7046:

Summary: Support new Admin API for single topic
Key: KAFKA-7046
URL: https://issues.apache.org/jira/browse/KAFKA-7046
Project: Kafka
Issue Type: New Feature
Components: admin
Affects Versions: 1.1.0
Reporter: darion yaphet

When I create, delete, or describe a topic with AdminClient, I often operate on just one topic. Currently I must wrap it into a collection.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
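The wrapping in question can be shown in plain Java: AdminClient methods such as deleteTopics and describeTopics take a Collection<String>, so even a single topic name has to be boxed first. A minimal sketch; the AdminClient calls appear only in comments since they need kafka-clients and a running cluster:

```java
import java.util.Collection;
import java.util.Collections;

public class SingleTopicWrap {
    // The boilerplate the issue asks to avoid: boxing one topic name into
    // a collection before every AdminClient call, e.g.
    //   admin.deleteTopics(wrap("my-topic"));
    //   admin.describeTopics(wrap("my-topic"));
    static Collection<String> wrap(String topic) {
        return Collections.singleton(topic);
    }

    public static void main(String[] args) {
        Collection<String> topics = wrap("my-topic");
        System.out.println(topics.size());               // 1
        System.out.println(topics.contains("my-topic")); // true
    }
}
```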
[jira] [Created] (KAFKA-7038) Support AdminClient Example
darion yaphet created KAFKA-7038:

Summary: Support AdminClient Example
Key: KAFKA-7038
URL: https://issues.apache.org/jira/browse/KAFKA-7038
Project: Kafka
Issue Type: New Feature
Components: admin
Reporter: darion yaphet
[jira] [Created] (KAFKA-7034) Remove the duplicated listTopics from Consumer
darion yaphet created KAFKA-7034:

Summary: Remove the duplicated listTopics from Consumer
Key: KAFKA-7034
URL: https://issues.apache.org/jira/browse/KAFKA-7034
Project: Kafka
Issue Type: Improvement
Components: clients
Reporter: darion yaphet

Both AdminClient and Consumer include a listTopics method, and both use the Cluster instance to get the topic names. They are very similar, so I think we should remove the Consumer's listTopics method.
[jira] [Created] (KAFKA-7033) Modify AbstractOptions's timeoutMs to Long type
darion yaphet created KAFKA-7033:

Summary: Modify AbstractOptions's timeoutMs to Long type
Key: KAFKA-7033
URL: https://issues.apache.org/jira/browse/KAFKA-7033
Project: Kafka
Issue Type: Improvement
Components: clients
Affects Versions: 1.1.0
Reporter: darion yaphet

Currently AbstractOptions's timeoutMs is an Integer; using Long to represent the timeout in milliseconds may be better.
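For context on how much headroom an int actually gives: Integer.MAX_VALUE milliseconds is roughly 24.8 days, which bounds what an Integer timeout can express. A quick check:

```java
public class TimeoutRange {
    public static void main(String[] args) {
        long maxMs = Integer.MAX_VALUE;            // 2_147_483_647 ms
        long days = maxMs / (24L * 60 * 60 * 1000); // ms per day = 86_400_000
        System.out.println(days);                   // 24 (i.e. ~24.8 days)
    }
}
```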
Re: The purpose of ProducerRecord Headers
thanks ~

Matthias J. Sax wrote on Sunday, June 10, 2018 at 3:38 PM:
> Check out the KIP that added headers:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers
>
> -Matthias
>
> On 6/8/18 8:23 PM, 逐风者的祝福 wrote:
> > Hi team:
> > While reading KafkaProducer's doSend() I found that each record has a Header[] headers.
> > But I don't know why we should use the headers?
> > Could someone help me?
> > thanks a lot ~

-- long is the way and hard that out of Hell leads up to light
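Headers attach key/byte-array metadata (for example trace IDs or schema versions) to a record without touching the payload itself. A minimal sketch of the encode/decode round-trip a header value goes through; the ProducerRecord construction is left as a comment since it needs kafka-clients, and the "trace-id" key is an illustrative example, not a Kafka convention:

```java
import java.nio.charset.StandardCharsets;

public class HeaderSketch {
    // A header is a (String key, byte[] value) pair carried next to the
    // payload. With kafka-clients a producer would attach one like:
    //   new ProducerRecord<>("topic", null, key, value,
    //       List.of(new RecordHeader("trace-id", encode("abc-123"))));
    static byte[] encode(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }

    // The consumer side turns the raw header bytes back into a string.
    static String decode(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] header = encode("abc-123");
        System.out.println(decode(header)); // abc-123
    }
}
```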
[jira] [Created] (KAFKA-6954) Add a new tool to loading data from file
darion yaphet created KAFKA-6954:

Summary: Add a new tool to load data from a file
Key: KAFKA-6954
URL: https://issues.apache.org/jira/browse/KAFKA-6954
Project: Kafka
Issue Type: New Feature
Components: tools
Affects Versions: 1.1.0
Reporter: darion yaphet

Sometimes we append data from one or more files. I wrote a small tool that loads data from a file and writes it to Kafka. I think this is very useful and could be added as a tool.
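A sketch of what such a tool might look like: read the file line by line and hand each line to a producer. The producer.send call is shown only as a comment; the class and method names here are illustrative, not taken from any patch attached to the issue:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FileLoaderSketch {
    // Split file content into lines; the real tool would then call
    //   producer.send(new ProducerRecord<>(topic, line));
    // for each line (that part needs kafka-clients and a running broker).
    static List<String> splitLines(String content) {
        return content.lines().toList();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("kafka-load", ".txt");
        Files.writeString(tmp, "msg-1\nmsg-2\n");
        List<String> lines = splitLines(Files.readString(tmp));
        System.out.println(lines.size()); // 2
        Files.deleteIfExists(tmp);
    }
}
```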
[jira] [Created] (KAFKA-6930) Update KafkaZkClient debug log
darion yaphet created KAFKA-6930:

Summary: Update KafkaZkClient debug log
Key: KAFKA-6930
URL: https://issues.apache.org/jira/browse/KAFKA-6930
Project: Kafka
Issue Type: Improvement
Components: core, zkclient
Affects Versions: 1.1.0
Reporter: darion yaphet

Currently KafkaZkClient prints data: Array[Byte] in the debug log; we should print the data as a String.
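The formatting difference the issue describes, in plain Java: printing a byte array directly yields the JVM's type-and-hash reference, while decoding it first yields the actual payload. KafkaZkClient itself is Scala; this is just an illustration of the problem, with a made-up JSON payload:

```java
import java.nio.charset.StandardCharsets;

public class DebugLogFormat {
    // What the debug log should show for raw ZK node data.
    static String readable(byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] data = "{\"version\":1}".getBytes(StandardCharsets.UTF_8);
        System.out.println(data);           // something like [B@1b6d3586 -- useless in a log
        System.out.println(readable(data)); // {"version":1}
    }
}
```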
[jira] [Resolved] (KAFKA-6908) Update LogDirsCommand's prompt information
[ https://issues.apache.org/jira/browse/KAFKA-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

darion yaphet resolved KAFKA-6908.
--
Resolution: Won't Fix

> Update LogDirsCommand's prompt information
> --
>
> Key: KAFKA-6908
> URL: https://issues.apache.org/jira/browse/KAFKA-6908
> Project: Kafka
> Issue Type: Improvement
> Components: admin, tools
> Affects Versions: 1.1.0
> Reporter: darion yaphet
> Priority: Minor
>
> LogDirsCommand's command-line arguments broker list and topic list are marked with RequiredArg,
> so we should append REQUIRED to the command info.
[jira] [Created] (KAFKA-6908) Update LogDirsCommand's prompt information
darion yaphet created KAFKA-6908:

Summary: Update LogDirsCommand's prompt information
Key: KAFKA-6908
URL: https://issues.apache.org/jira/browse/KAFKA-6908
Project: Kafka
Issue Type: Improvement
Components: admin, tools
Affects Versions: 1.1.0
Reporter: darion yaphet

LogDirsCommand's command-line arguments broker list and topic list are marked with RequiredArg, so we should append REQUIRED to the command info.
Re: Number of kafka topics/partitions supported per cluster of n nodes
Kafka stores its metadata in a ZooKeeper cluster, so evaluating how many topics and partitions in total can be created in a cluster may amount to testing ZooKeeper's scalability and disk I/O performance.

2015-07-28 13:51 GMT+08:00 Prabhjot Bharaj prabhbha...@gmail.com:

Hi, I'm looking for a benchmark that explains how many topics and partitions in total can be created in a cluster of n nodes, given that the message size varies between x and y bytes, how that varies with heap size, and how it affects system performance. E.g. the result should look like: t topics with p partitions each can be supported in a cluster of n nodes with a heap size of h MB, before the cluster sees things like JVM crashes, high memory usage, system slowdown, etc. I think such benchmarks must exist so that we can make better decisions on the ops side. If these details don't exist, I'll be running this test myself, varying the parameters described above. I would be happy to share the numbers with the community.
Thanks, prabcs
Re: [VOTE] 0.8.2-beta Release Candidate 1
+1 The new Sender is added ~

2014-10-25 1:18 GMT+08:00 Neha Narkhede neha.narkh...@gmail.com:

+1 (binding)
Verified the quickstart, docs, unit tests on the source and binary release.
Thanks, Neha

On Fri, Oct 24, 2014 at 9:26 AM, Gwen Shapira gshap...@cloudera.com wrote:

+1 (non-official community vote). Kicked the tires of the binary release. Works out of the box as expected, new producer included.

On Fri, Oct 24, 2014 at 5:22 AM, Joe Stein joe.st...@stealth.ly wrote:

Jun, I updated https://people.apache.org/~joestein/kafka-0.8.2-beta-candidate1/java-doc/ with the contents of kafka-clients-0.8.2-beta-javadoc.jar.
There weren't any artifact changes, so I don't think we need a new release candidate ... we can extend the vote to Monday if we don't get a pass/fail by 2pm PT today.

/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
***/

On Fri, Oct 24, 2014 at 12:04 AM, Jun Rao jun...@gmail.com wrote:

Joe, Verified quickstart on both the src and binary release. They all look good. The javadoc doesn't seem to include those in clients. Could you add them?
Thanks, Jun

On Tue, Oct 21, 2014 at 1:58 PM, Joe Stein joe.st...@stealth.ly wrote:

This is the first candidate for release of Apache Kafka 0.8.2-beta.

Release Notes for the 0.8.2-beta release:
https://people.apache.org/~joestein/kafka-0.8.2-beta-candidate1/RELEASE_NOTES.html

*** Please download, test and vote by Friday, October 24th, 2pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
https://svn.apache.org/repos/asf/kafka/KEYS
in addition to the md5, sha1 and sha2 (SHA256) checksums.

* Release artifacts to be voted upon (source and binary):
https://people.apache.org/~joestein/kafka-0.8.2-beta-candidate1/
* Maven artifacts to be voted upon prior to release:
https://repository.apache.org/content/groups/staging/
* scala-doc:
https://people.apache.org/~joestein/kafka-0.8.2-beta-candidate1/scala-doc/
* java-doc:
https://people.apache.org/~joestein/kafka-0.8.2-beta-candidate1/java-doc/
* The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2-beta tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2b2c3da2c52bc62a89d60f85125d3723c8410fa0
Re: [jira] [Updated] (KAFKA-1620) Make kafka api protocol implementation public
I'm curious why Kafka doesn't implement its protocol with Protocol Buffers or some other tool. That would make it easier to use from other languages.

2014-09-01 22:48 GMT+08:00 Anton Karamanov (JIRA) j...@apache.org:

[ https://issues.apache.org/jira/browse/KAFKA-1620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anton Karamanov updated KAFKA-1620:
---
Reviewer: Jun Rao
Assignee: Anton Karamanov
Status: Patch Available (was: Open)

Make kafka api protocol implementation public
-
Key: KAFKA-1620
URL: https://issues.apache.org/jira/browse/KAFKA-1620
Project: Kafka
Issue Type: Improvement
Reporter: Anton Karamanov
Assignee: Anton Karamanov
Attachments: 0001-KAFKA-1620-Make-kafka-api-protocol-implementation-pu.patch

Some of the classes which implement the Kafka api protocol, such as {{RequestOrResponse}} and {{FetchRequest}}, are defined as private to the {{kafka}} package. Those classes would be extremely useful for writing custom clients (we're using Scala with Akka and implementing one directly on top of Akka TCP), and don't seem to contain any actual internal logic of Kafka. Therefore it seems like a nice idea to make them public.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Monitoring Producers at Large Scale
Sorry, I want to ask: do you want to monitor Kafka producers, or Kafka brokers and ZooKeeper? It seems you want to monitor exceptions in the Kafka cluster, e.g. Leader Not Found, queue is full, resend failed, etc.

2014-06-25 8:20 GMT+08:00 Bhavesh Mistry mistry.p.bhav...@gmail.com:

We use Kafka as a transport layer to transport application logs. How do we monitor producers at large scale: about 6000 boxes x 4 topics per box, so roughly 24000 producers (spread across multiple data centers; we have brokers per DC)? We do the monitoring based on logs. I have tried intercepting logs via a custom Log4J implementation which intercepts only WARN, ERROR and FATAL events (org.apache.log4j.AppenderSkeleton's append method) and sends those logs to the brokers. This is working, but after load testing it sometimes causes a deadlock between ProducerSendThread and the Producer. I know there are JMX monitoring MBeans we can pull the data from, but I would like to monitor exceptions, e.g. Leader Not Found, queue is full, resend failed, etc. in the Kafka library. How does LinkedIn monitor its producers?
Thanks, Bhavesh
Re: Max Message Size
max.message.bytes: this is the largest message size Kafka will allow to be appended to a topic. Note that if you increase this size you must also increase your consumers' fetch size so they can fetch messages this large.
When the message length is larger than max.message.bytes, an exception may be thrown, I think ~

2014-05-14 2:23 GMT+08:00 Bhavesh Mistry mistry.p.bhav...@gmail.com:

Hi Kafka Team, is there any message size limitation on the producer side? If there is, what happens to the message: does it get truncated, or is the message lost?
Thanks, Bhavesh
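A sketch of the size check involved: the broker rejects (rather than truncates) a message larger than the topic's max.message.bytes, so a producer can pre-check payload size before sending. The limit value below is illustrative, not a Kafka default:

```java
public class MessageSizeCheck {
    // Stand-in for the topic config max.message.bytes; the actual value
    // comes from the broker/topic configuration, not this constant.
    static final int MAX_MESSAGE_BYTES = 1_000_000;

    // The broker rejects an oversized message with an error rather than
    // truncating it, so checking up front avoids a failed send.
    static boolean fits(byte[] payload) {
        return payload.length <= MAX_MESSAGE_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(fits(new byte[500_000]));   // true
        System.out.println(fits(new byte[2_000_000])); // false
    }
}
```

Remember the note above: raising max.message.bytes without raising the consumer fetch size leaves consumers unable to fetch the larger messages.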