Re: Mirrormaker between 0.8.2.1 cluster and 0.10 cluster

2016-07-29 Thread Gwen Shapira
You need to use the old MirrorMaker (0.8.2.1) to mirror 0.8.2.1 to 0.10.0.0. This is true in general: always use MirrorMaker from the older release, because new Kafka can talk to old clients but not the other way around. Gwen On Fri, Jul 29, 2016 at 12:04 AM, Yifan Ying
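For reference, a minimal sketch of running MirrorMaker from the 0.8.2.1 distribution, consuming from the old cluster and producing to the new one. The hostnames, config file names, and topic whitelist below are placeholders, not taken from the thread; adjust to your environment.

```
# Run MirrorMaker from the 0.8.2.1 distribution (placeholder hosts/paths).
# consumer.properties points at the source (0.8.2.1) cluster, e.g.:
#   zookeeper.connect=old-zk:2181
#   group.id=mm-0821-to-010
# producer.properties points at the target (0.10) cluster, e.g.:
#   metadata.broker.list=new-broker1:9092,new-broker2:9092
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist 'my-topic.*' \
  --num.streams 4
```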

Re: Chocolatey packages for ZooKeeper, Kafka?

2016-07-29 Thread Gwen Shapira
If anyone packages Kafka with Chocolatey, we'll be happy to add this to our ecosystem page. Currently Apache Kafka only publishes tarballs. Gwen On Thu, Jul 28, 2016 at 6:58 PM, Andrew Pennebaker wrote: > Could we please publish Chocolatey packages for ZooKeeper

Re: Kafka 0.9.0.1 failing on new leader election

2016-07-29 Thread Gwen Shapira
you know, I ran into those null pointer exceptions when I accidentally tested Kafka with a mismatched version of zkclient. Can you share the versions of both? And make sure you have only one zkclient on your classpath? On Tue, Jul 26, 2016 at 6:40 AM, Sean Morris (semorris)
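One way to check which zkclient versions are in play (a sketch; the paths and build tool are assumptions about your setup):

```
# Version of zkclient shipped with the broker distribution
ls $KAFKA_HOME/libs | grep -i zkclient

# For a Maven-based client application, see what actually lands on the classpath
mvn dependency:tree | grep -i zkclient
```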

Re: Too Many Open Files

2016-07-29 Thread Gwen Shapira
woah, it looks like you have 15,000 replicas per broker? You can go into the directory you configured for kafka's log.dir and see how many files you have there. Depending on your segment size and retention policy, you could have hundreds of files per partition there... Make sure you have at
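A quick way to get a feel for the numbers (sketch; the log.dir path is a placeholder):

```
# Count partition directories and total files under the broker's log.dir
ls /var/kafka-logs | wc -l
find /var/kafka-logs -type f | wc -l

# File descriptors currently held by the broker process
lsof -p $(pgrep -f kafka.Kafka) | wc -l
```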

Re: Verify log compaction

2016-07-29 Thread John Holland
Check the log-cleaner.log file on the server. When the thread runs you'll see output for every partition it compacts and the compaction ratio it achieved. The __consumer_offsets topic is compacted, I see log output from it being compacted frequently. Depending on your settings for the topic it
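For example (a sketch; the log directory and the exact wording of the cleaner's log lines may differ by version and log4j config):

```
# Watch the cleaner work; compacted partitions and size-reduction stats show up here
tail -f $KAFKA_HOME/logs/log-cleaner.log

# Check whether a particular topic's partitions have been cleaned recently
grep "__consumer_offsets" $KAFKA_HOME/logs/log-cleaner.log | tail
```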

Verify log compaction

2016-07-29 Thread David Yu
Hi, We are using Kafka 0.9.0.0. One of our topics is set to use log compaction, and we have also set log.cleaner.enable. However, we suspect that the topic is not being compacted. What is the best way for us to verify that compaction is happening? Thanks, David
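To double-check that the topic is actually configured for compaction, something like the following should work (a sketch; the ZooKeeper address and topic name are placeholders):

```
# The topic should show cleanup.policy=compact in its Configs
bin/kafka-topics.sh --describe --zookeeper zk1:2181 --topic my-compacted-topic

# If it doesn't, the policy can be set as a topic-level override
bin/kafka-configs.sh --zookeeper zk1:2181 --alter \
  --entity-type topics --entity-name my-compacted-topic \
  --add-config cleanup.policy=compact
```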

Too Many Open Files

2016-07-29 Thread Kessiler Rodrigues
Hi guys, I have been experiencing some issues on Kafka, where it's throwing "too many open files". I have around 6k topics with 5 partitions each. My cluster consists of 6 brokers. All of them are running Ubuntu 16 and the file limit settings are: `cat /proc/sys/fs/file-max` 200
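For what it's worth, the per-process limit usually matters more than fs.file-max. A sketch of checking and raising it (the `kafka` user and the 128000 value are just examples, not recommendations from the thread):

```
# Limit actually applied to the running broker process
cat /proc/$(pgrep -f kafka.Kafka)/limits | grep "open files"

# Example /etc/security/limits.conf entries to raise the per-user limit
kafka  soft  nofile  128000
kafka  hard  nofile  128000
```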

Re: compression ratio

2016-07-29 Thread Ian Wrigley
I believe so, yes. --- Ian Wrigley Director, Education Services Confluent, Inc > On Jul 29, 2016, at 7:51 PM, Tauzell, Dave > wrote: > > A compression ratio of .5 means we are getting about 2x compression? > > Dave Tauzell | Senior Software Engineer |

RE: compression ratio

2016-07-29 Thread Tauzell, Dave
A compression ratio of .5 means we are getting about 2x compression? Dave Tauzell | Senior Software Engineer | Surescripts O: 651.855.3042 | www.surescripts.com | dave.tauz...@surescripts.com Connect with us: Twitter I LinkedIn I Facebook I YouTube -Original Message- From: Tauzell,

Re: Same partition number of different Kafka topics

2016-07-29 Thread Jack Huang
Hi Gerard, After further digging, I found that the clients we are using also have different partitioners. The Python one uses murmur2 ( https://github.com/dpkp/kafka-python/blob/master/kafka/partitioner/default.py), and the NodeJS one uses its own impl (

Re: Kafka 0.9.0.1 failing on new leader election

2016-07-29 Thread Sean Morris (semorris)
Yes. This happens after several days of running, not on initial startup. Thanks. On 7/29/16, 11:54 AM, "David Garcia" wrote: >Well, just a dumb question, but did you include all the brokers in your client >connection properties? > >On 7/29/16, 10:48 AM,

Re: Questions about Kafka Streams Partitioning & Deployment

2016-07-29 Thread Michael Noll
Michael, > Guozhang, in (2) above did you mean "some keys *may be* hashed to different > partitions and the existing local state stores will not be valid?" > That fits with our understanding. Yes, that's what Guozhang meant. Corrected version: When you increase the number of input

Re: [kafka-clients] [VOTE] 0.10.0.1 RC0

2016-07-29 Thread Dana Powers
+1 tested against kafka-python integration test suite = pass. Aside: as the scope of kafka gets bigger, it may be useful to organize release notes into functional groups like core, brokers, clients, kafka-streams, etc. I've found this useful when organizing kafka-python release notes. -Dana On

Re: Kafka 0.9.0.1 failing on new leader election

2016-07-29 Thread David Garcia
Well, just a dumb question, but did you include all the brokers in your client connection properties? On 7/29/16, 10:48 AM, "Sean Morris (semorris)" wrote: Anyone have any ideas? From: semorris > Date: Tuesday,
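For context, the question is whether the client lists more than one broker, so it can still reach the cluster and discover the new leader if its bootstrap broker goes down. A hedged example of the relevant client setting (hostnames are placeholders):

```
# Client connection properties (0.9 new consumer/producer): list several
# brokers, not just one, so a single broker failure doesn't strand the client.
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
```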

Re: Kafka 0.9.0.1 failing on new leader election

2016-07-29 Thread Sean Morris (semorris)
Anyone have any ideas? From: semorris > Date: Tuesday, July 26, 2016 at 9:40 AM To: "users@kafka.apache.org" > Subject: Kafka 0.9.0.1 failing on new leader election

RE: Kafka streams Issue

2016-07-29 Thread Hamza HACHANI
Thanks, I will try that. Hamza From: Tauzell, Dave Sent: Friday, July 29, 2016 03:18:47 To: users@kafka.apache.org Subject: RE: Kafka streams Issue Let's say you currently have: Processing App ---> OUTPUT TOPIC ---> output

RE: Kafka streams Issue

2016-07-29 Thread Tauzell, Dave
Let's say you currently have: Processing App ---> OUTPUT TOPIC ---> output consumer You would ideally like the processing app to only write to the output topic every minute, but cannot easily do this. So what you might be able to do is: Processing App ---> INTERMEDIATE OUTPUT TOPIC --->

Re: Chocolatey packages for ZooKeeper, Kafka?

2016-07-29 Thread Patrick Hunt
Hi Andrew, if you want to publish something like that for ZK (on GitHub, say) we'd be happy to link to it on our wiki "useful tools" page. Regards, Patrick On Thu, Jul 28, 2016 at 6:58 PM, Andrew Pennebaker < andrew.penneba...@gmail.com> wrote: > Could we please publish Chocolatey packages for

RE: compression ratio

2016-07-29 Thread Tauzell, Dave
Thanks. That's just tracked on the client, right? -Dave Dave Tauzell | Senior Software Engineer | Surescripts O: 651.855.3042 | www.surescripts.com | dave.tauz...@surescripts.com Connect with us: Twitter I LinkedIn I Facebook I YouTube -Original Message- From: Ian Wrigley

Re: [VOTE] 0.10.0.1 RC0

2016-07-29 Thread Harsha Ch
Hi Ismael, I would like this JIRA included in the minor release: https://issues.apache.org/jira/browse/KAFKA-3950. Thanks, Harsha On Fri, Jul 29, 2016 at 7:46 AM Ismael Juma wrote: > Hello Kafka users, developers and client-developers, > > This is the first

Re: Same partition number of different Kafka topics

2016-07-29 Thread Gerard Klijs
The default partitioner will take the key, compute a hash of it, and do a modulo operation to determine the partition it goes to. Some things which might cause it to end up different for different topics: - partition numbers are not the same (you already checked) - the key is not exactly the same, for

RE: Kafka streams Issue

2016-07-29 Thread Tauzell, Dave
You could send the message immediately to an intermediary topic. Then have a consumer of that topic that pulls messages off and waits until the minute is up. -Dave Dave Tauzell | Senior Software Engineer | Surescripts O: 651.855.3042 | www.surescripts.com | dave.tauz...@surescripts.com

RE: SSD or not for Kafka brokers?

2016-07-29 Thread Tauzell, Dave
In addition, for sequential writes, which are common with Kafka, SSD isn't much faster than HDD. Dave Tauzell | Senior Software Engineer | Surescripts O: 651.855.3042 | www.surescripts.com | dave.tauz...@surescripts.com Connect with us: Twitter I LinkedIn I Facebook I YouTube -Original

Kafka streams Issue

2016-07-29 Thread Hamza HACHANI
> Good morning, > > I'm an ICT student at TELECOM BRETAGNE (a French school). > I followed your presentations on YouTube and found them really > interesting. > I'm trying to do some things with Kafka, and I've now been blocked for about 3 days. > I'm trying to control the time in

Re: compression ratio

2016-07-29 Thread Ian Wrigley
Hi Dave The JMX metric compression-rate-avg should give you that info. Regards Ian. --- Ian Wrigley Director, Education Services Confluent, Inc > On Jul 29, 2016, at 2:58 PM, Tauzell, Dave > wrote: > > Is there a good way to see what sort of compression ratio
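One way to read that metric without extra tooling is Kafka's bundled JmxTool. This is a sketch that assumes the producer JVM exposes JMX on port 9999 and uses client-id my-producer; both are placeholders:

```
# Poll the Java producer's average compression rate every 5 seconds
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name 'kafka.producer:type=producer-metrics,client-id=my-producer' \
  --attributes compression-rate-avg \
  --reporting-interval 5000
```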

compression ratio

2016-07-29 Thread Tauzell, Dave
Is there a good way to see what sort of compression ratio is being achieved? -Dave Dave Tauzell | Senior Software Engineer | Surescripts O: 651.855.3042 | www.surescripts.com | dave.tauz...@surescripts.com Connect with us:

Re: SSD or not for Kafka brokers?

2016-07-29 Thread Gerard Klijs
As I understood it, it won't really have any advantage over using HDD, since most things will work from the working memory anyway. You might want to use SSD for ZooKeeper though. On Fri, Jul 29, 2016 at 12:19 AM Kessiler Rodrigues wrote: > Hi guys, > > Should I use SSD for

Re: Jars in Kafka 0.10

2016-07-29 Thread Gerard Klijs
No, if you don't use Streams you don't need them. If you have no clients (so also no MirrorMaker) running on the same machine, you also don't need the client jar; if you run ZooKeeper separately, you don't need those jars either. On Fri, Jul 29, 2016 at 4:22 PM Bhuvaneswaran Gopalasami <

Jars in Kafka 0.10

2016-07-29 Thread Bhuvaneswaran Gopalasami
I have recently started looking into Kafka. I noticed that the number of jars in Kafka 0.10 has increased compared to 0.8.2. Do we really need all those libraries to run Kafka? Thanks, Bhuvanes

AUTO: Yan Wang is out of the office (returning 08/08/2016)

2016-07-29 Thread Yan Wang
I am out of the office until 08/08/2016. Note: This is an automated response to your message "Mirrormaker between 0.8.2.1 cluster and 0.10 cluster" sent on 7/29/2016 2:04:44 AM. This is the only notification you will receive while this person is away.

Mirrormaker between 0.8.2.1 cluster and 0.10 cluster

2016-07-29 Thread Yifan Ying
Hi all, I am trying to use the mirrormaker on the 0.10 cluster to mirror the 0.8.2.1 cluster into 0.10 cluster. Then I got a bunch of consumer errors as follows: Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@f9533ee (kafka.consumer.ConsumerFetcherThread)