Re: hi

2016-09-15 Thread kant kodali
They are hosted on AWS and I don't think there are any network issues, because I tried testing other queuing systems with no issues. However, I am using a Node.js client with the following code. I am not sure if there are any errors or anything I didn't set in the following code? //

Re: hi

2016-09-15 Thread Ali Akhtar
Your code seems to be using the public IPs of the servers. If all 3 machines are in the same availability zone on AWS, try using the private IPs, and then they might communicate over the local network. Did you change any default settings? Do you get the same results if you run kafka-consumer.sh and

Re: hi

2016-09-15 Thread kant kodali
172.* addresses are all private IPs for my machines; I double-checked it. I have not changed any default settings. I don't know how to use kafka-consumer.sh or kafka-producer.sh because it looks like they want me to specify a group, and I didn't create any consumer group because I am using a single producer and con
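
The tools referred to above ship with Kafka as kafka-console-producer.sh and kafka-console-consumer.sh, and neither requires a pre-created consumer group. A minimal smoke test over the private network, assuming a hypothetical topic named "test" and placeholder addresses:

    # write a few messages from stdin (broker address is a placeholder)
    bin/kafka-console-producer.sh --broker-list 172.31.0.10:9092 --topic test

    # read them back (the 0.10 console consumer can connect to ZooKeeper on 2181)
    bin/kafka-console-consumer.sh --zookeeper 172.31.0.10:2181 --topic test --from-beginning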

Re: hi

2016-09-15 Thread kant kodali
My goal is to test the throughput (# messages per second) given my setup and with a data size of 1KB. If you guys already have some idea of these numbers, that would be helpful as well. On Thu, Sep 15, 2016 12:24 AM, kant kodali kanth...@gmail.com wrote: 172.* is all private IPs for my machin
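
For a benchmark like this, Kafka also bundles its own load generator, which takes the client library out of the equation. A sketch, assuming a hypothetical topic "test" and a placeholder broker address:

    bin/kafka-producer-perf-test.sh --topic test \
      --num-records 300000 --record-size 1024 --throughput -1 \
      --producer-props bootstrap.servers=172.31.0.10:9092 acks=1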

Re: hi

2016-09-15 Thread Ali Akhtar
What's the instance size that you're using? With 300k messages your single broker might not be able to handle it. On Thu, Sep 15, 2016 at 12:30 PM, kant kodali wrote: > My goal is to test the throughput (#messages per second) given my setup and > with a data size of 1KB. if you guys already have

Re: hi

2016-09-15 Thread kant kodali
m4.xlarge On Thu, Sep 15, 2016 12:33 AM, Ali Akhtar ali.rac...@gmail.com wrote: What's the instance size that you're using? With 300k messages your single broker might not be able to handle it. On Thu, Sep 15, 2016 at 12:30 PM, kant kodali wrote: My goal is to test the throughput

Re: hi

2016-09-15 Thread Ali Akhtar
Lower the workload and increase it gradually: start from 10 messages, go to 100, then 1000, and so on. See if it slows down as the workload increases. If so, you need more brokers + partitions to handle the workload. On Thu, Sep 15, 2016 at 12:42 PM, kant kodali wrote: > m4.xlarge > > > > > > > On Thu, Se
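
Partition count is set when a topic is created (and can be raised later with --alter), so scaling out for the heavier runs means creating or altering the topic accordingly. A sketch with the stock tooling, using a placeholder ZooKeeper address and topic name:

    bin/kafka-topics.sh --create --zookeeper 172.31.0.10:2181 \
      --topic test --partitions 3 --replication-factor 1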

v0.10 MirrorMaker producer cannot send v0.8 message from v0.10 broker

2016-09-15 Thread Samuel Zhou
Hi, I have a pipeline that publishes messages with a v0.8 client; the messages go to a v0.10 broker first, then MirrorMaker consumes them and publishes them to other v0.10 brokers. But I got the following message in the MM log: java.lang.IllegalArgumentException: Invalid timestamp -1 at org.apac
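
The -1 comes from the v0.8 producer, which attaches no timestamp, so the mirrored record carries NO_TIMESTAMP and the 0.10 producer inside MirrorMaker rejects it. One workaround sometimes suggested (an assumption here, not something confirmed in this thread) is to have the source brokers stamp messages on append so MirrorMaker never sees -1, e.g. per topic:

    # placeholder topic name and ZooKeeper address
    bin/kafka-configs.sh --zookeeper 172.31.0.10:2181 --alter \
      --entity-type topics --entity-name my-topic \
      --add-config message.timestamp.type=LogAppendTime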

Re: hi

2016-09-15 Thread kant kodali
Yeah.. I tried it with 10 messages with a single broker and only one partition; that looked instantaneous, ~5K messages/sec for a data size of 1KB. I tried it with 1000 messages; that looked instantaneous as well, ~5K messages/sec for a data size of 1KB. I tried it with 10K messages with a single broker

Publish to 1 topic, consume from N

2016-09-15 Thread Luiz Cordeiro
Hello, We’re considering migrating an AMQ-based platform to Kafka. However, our application logic needs an AMQ feature called Dynamic Binding; that is, on AMQ one publishes messages to an Exchange, which can be dynamically configured to deliver a copy of the message to several queues, based on b

Re: hi

2016-09-15 Thread Ali Akhtar
The issue is clearly that you're running out of resources, so I would add more brokers and/or larger instances. You're also using Node, which is not the best for performance. A compiled language such as Java would give you the best performance. Here's a case study that should help: https://enginee
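
For comparison against the Node client, a minimal JVM producer for the same 1KB workload could look like the sketch below. The broker address, topic name, and the batch.size/linger.ms values are placeholders, not recommendations from this thread.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ThroughputTest {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "172.31.0.10:9092"); // placeholder private IP
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("acks", "1");
            props.put("batch.size", "65536"); // batch up to 64KB per partition
            props.put("linger.ms", "5");      // wait briefly so batches fill up

            byte[] payload = new byte[1024];  // 1KB message body
            try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 300_000; i++) {
                    producer.send(new ProducerRecord<>("test", payload)); // async send
                }
                producer.flush();
                long elapsed = Math.max(System.currentTimeMillis() - start, 1);
                System.out.println((300_000L * 1000 / elapsed) + " msgs/sec");
            }
        }
    }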

Re: hi

2016-09-15 Thread kant kodali
I used Node.js client libraries for all three, and yes, I want to make sure I am comparing apples to apples, so I make it as equivalent as possible. Again, the big question is: what is the right setup for Kafka to be comparable with the others I mentioned in my previous email? On Thu, Sep 15, 2016

Re: hi

2016-09-15 Thread Ali Akhtar
I'd post to the mailing list again with a new subject and ask that. On Thu, Sep 15, 2016 at 1:52 PM, kant kodali wrote: > I used node.js client libraries for all three and yes I want to make sure > I am > comparing apples to apples so I make it as equivalent as possible. > Again the big question

What is the fair setup of Kafka to be comparable with NATS or NSQ?

2016-09-15 Thread kant kodali
With Kafka I tried it with 10 messages with a single broker and only one partition; that looked instantaneous, ~5K messages/sec for a data size of 1KB. I tried it with 1000 messages; that looked instantaneous as well, ~5K messages/sec for a data size of 1KB. I tried it with 10K messages with a single b

Re: Publish to 1 topic, consume from N

2016-09-15 Thread Ali Akhtar
It sounds like you can implement the 'mapping service' component yourself using Kafka. Have all of your messages go to one Kafka topic. Have one consumer group listening to this 'everything goes here' topic. This consumer group acts as your mapping service. It looks at each message, and based on
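
A rough sketch of that mapping-service idea (the topic names, routing rule, and group id are placeholders; error handling and offset tuning are omitted):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class MappingService {
        public static void main(String[] args) {
            Properties cProps = new Properties();
            cProps.put("bootstrap.servers", "localhost:9092");
            cProps.put("group.id", "mapping-service");
            cProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            cProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            Properties pProps = new Properties();
            pProps.put("bootstrap.servers", "localhost:9092");
            pProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            pProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
                consumer.subscribe(Collections.singletonList("everything")); // the single inbound topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> rec : records) {
                        // placeholder routing rule: derive the destination topic from the message key
                        String destination = rec.key() != null ? "queue-" + rec.key() : "queue-default";
                        producer.send(new ProducerRecord<>(destination, rec.key(), rec.value()));
                    }
                }
            }
        }
    }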

Re: What is the fair setup of Kafka to be comparable with NATS or NSQ?

2016-09-15 Thread Ben Davison
Hi Kant, I was following the other thread; can you try using a different benchmarking client for a test? https://grey-boundary.io/load-testing-apache-kafka-on-aws/ Ben On Thursday, 15 September 2016, kant kodali wrote: > with Kafka I tried it with 10 messages with single broker and only one >

Re: What is the fair setup of Kafka to be comparable with NATS or NSQ?

2016-09-15 Thread kant kodali
Hi Ben, I can give that a try, but can you tell me the suspicion or motivation behind it? In other words, you think a single partition and single broker should be comparable to the setup I had with NATS and NSQ, except you suspect the client library or something? Thanks, Kant On Thu, Sep 15, 2016 2:1

Re: What is the fair setup of Kafka to be comparable with NATS or NSQ?

2016-09-15 Thread Ben Davison
Yup, I suspect the client library. On Thu, Sep 15, 2016 at 10:28 AM, kant kodali wrote: > Hi Ben, > I can give that a try but can you tell me the suspicion or motivation > behind it? > other words you think single partition and single broker should be > comparable to > the setup I had with NATS

Re: What is the fair setup of Kafka to be comparable with NATS or NSQ?

2016-09-15 Thread kant kodali
Hi Ben, Also, the article that you pointed out clearly shows the setup had multiple partitions and multiple workers and so on.. that's why at this point my biggest question is: what is the fair setup for Kafka so it's comparable with NATS and NSQ? And since you suspect the client library, I can give tha

Re: no kafka controllers in zookeeper

2016-09-15 Thread Francesco laTorre
Hi, *get* /controller is the way to go, though I thought there should have been children. Cheers, Francesco On 13 September 2016 at 15:29, Francesco laTorre < francesco.lato...@openbet.com> wrote: > Hi, > > Anyone else finding this weird ? > > Cheers, > Francesco > > On 12 September 2016 at 15:53, Fran
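
For reference, /controller is a leaf znode with no children; it holds a small JSON blob naming the current controller broker. A sketch with the bundled ZooKeeper shell (the output line is illustrative, not taken from this cluster):

    bin/zookeeper-shell.sh localhost:2181 get /controller
    # {"version":1,"brokerid":2,"timestamp":"1473925068362"}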

which port should I use 9091 or 9092 or 2181 to send messages through kafka when using a client Library?

2016-09-15 Thread kant kodali
Which port should I use, 9091 or 9092 or 2181, to send messages through Kafka when using a client library? I start Kafka as follows: sudo bin/zookeeper-server-start.sh config/zookeeper.properties sudo ./bin/kafka-server-start.sh config/server.properties and I don't see any process running on 9091 or

Re: which port should I use 9091 or 9092 or 2181 to send messages through kafka when using a client Library?

2016-09-15 Thread Ali Akhtar
Examine server.properties and see which port you're using in there. On Thu, Sep 15, 2016 at 3:52 PM, kant kodali wrote: > which port should I use 9091 or 9092 or 2181 to send messages through kafka > when using a client Library? > I start kafka as follows: > sudo bin/zookeeper-server-start.sh con

Re: which port should I use 9091 or 9092 or 2181 to send messages through kafka when using a client Library?

2016-09-15 Thread kant kodali
I haven't changed anything from https://github.com/apache/kafka/blob/trunk/config/server.properties and it looks like it is pointing to ZooKeeper. Question: does the producer client need to point to 9092 and the consumer to 2181? Is that the standard? Why not have both point to the same thing? On

Re: which port should I use 9091 or 9092 or 2181 to send messages through kafka when using a client Library?

2016-09-15 Thread UMESH CHAUDHARY
No, that is not required when you use the new consumer API. You have to specify bootstrap.servers, which will have 9092 (for PLAINTEXT, usually). In the old consumer API you need the ZooKeeper server, which points to 2181. On Thu, 15 Sep 2016 at 17:03 kant kodali wrote: > I haven't changed anything from > ht
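
In other words, with the new (0.9+) consumer both producer and consumer point at the broker port, never at ZooKeeper. A minimal sketch, assuming a hypothetical topic "test" and the default PLAINTEXT listener on 9092:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PortCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // broker port, not ZooKeeper's 2181
            props.put("group.id", "test-group");              // required by the new consumer; any name works
            props.put("auto.offset.reset", "earliest");       // read from the start for this test
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("test"));
                ConsumerRecords<String, String> records = consumer.poll(5000);
                for (ConsumerRecord<String, String> rec : records) {
                    System.out.println(rec.offset() + ": " + rec.value());
                }
            }
        }
    }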

Re: which port should I use 9091 or 9092 or 2181 to send messages through kafka when using a client Library?

2016-09-15 Thread kant kodali
@Umesh According to these examples it looks like both producer and consumer specify bootstrap.servers. What is PLAINTEXT? Do I need to change something here: https://github.com/apache/kafka/blob/trunk/config/server.properties ? Because when I specify port 9092 for both producer and consumer, or just either
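
PLAINTEXT is simply the unencrypted, unauthenticated listener protocol that the stock server.properties exposes on 9092; nothing needs to change for it unless you want TLS or SASL. For a multi-machine AWS setup, the lines that usually matter are the listener ones; a sketch with placeholder private IPs:

    # config/server.properties (addresses are placeholders)
    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://172.31.0.10:9092   # address clients should use to reach this broker
    zookeeper.connect=172.31.0.10:2181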

Re: Publish to 1 topic, consume from N

2016-09-15 Thread Marko Bonaći
1. You can create N topics. 2. You control from the producer where each message goes. 3. You have a consumer that fetches from M different topics: https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#subscribe(java.util.Collection) Isn't this architecture flexible eno
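
The linked subscribe call takes a collection, so a single consumer can cover M topics at once. A small fragment, assuming a KafkaConsumer configured as in any standard example and hypothetical topic names:

    // given an already-configured KafkaConsumer<String, String> consumer:
    consumer.subscribe(java.util.Arrays.asList("topic-a", "topic-b", "topic-c"));
    // subsequent poll() calls return records from whichever of the three topics has data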

compacted log key limits

2016-09-15 Thread Wesley Chow
Is there any guidance on a maximum number of different keys in a compacted log? Such as total numbers, or "keys need to fit in memory, message data does not", etc. Is it unreasonable to expect tens or hundreds of millions of keys in a single topic to be handled gracefully? Thanks, Wes

Partition creation issues (0.9.0.1)

2016-09-15 Thread Apurva Sharma
Kafka Topic Creation issues (via Kafka-Manager with auto.create.topics.enable = false) Version: 0.9.0.1 We created a topic "web" via Kafka-Manager (our brokers are configured with auto-create set to false) and then clicked on Generate Partitions, and according to the tool, the topic has been created c
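
One way to verify what actually exists on the brokers, independent of what Kafka-Manager reports ("web" is the topic from the message; the ZooKeeper address is a placeholder):

    bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic web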

Schema for jsonConverter

2016-09-15 Thread Srikrishna Alla
I am trying to use the JDBC connector to send records from Kafka 0.9 to a DB. I am using JsonConverter to convert the records. My connector is failing when it's checking the schema I am using. Please let me know what the issue is with my JSON schema. Configuration used: key.converter=org.apache.kafka.con

Re: Schema for jsonConverter

2016-09-15 Thread Gwen Shapira
Most people use JSON without a schema, so you should probably change your configuration to: key.converter.schemas.enable=false value.converter.schemas.enable=false On Thu, Sep 15, 2016 at 4:04 PM, Srikrishna Alla wrote: > I am trying to use jdbc connector to send records from Kafka 0.9 to DB. I >
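
Spelled out, that suggestion corresponds to the standard Kafka Connect converter settings below (schemas.enable stays true if, as the follow-up notes, the payload really does embed a schema):

    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false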

Re: Schema for jsonConverter

2016-09-15 Thread Gwen Shapira
ah, never mind - I just noticed you do use a schema... Maybe you are running into this? https://issues.apache.org/jira/browse/KAFKA-3055 On Thu, Sep 15, 2016 at 4:20 PM, Gwen Shapira wrote: > Most people use JSON without schema, so you should probably change > your configuration to: > > key.conve

Old replica data does not clear following the retention.ms property

2016-09-15 Thread Anish Mashankar
I recently ran partition reassignment on some topics. This moved the replicas of some partitions around the cluster. It was seamless. However, when it came to purging old logs following the retention.ms property of the topic, the replica partitions were not cleared. The leader partition, however, w