Re: Kafka over Satellite links

2016-03-02 Thread Christian Csar
I would not do that. I admit I may be a bit biased due to working for Buddy Platform (IoT backend stuff, including telemetry collection), but you want to send the data via some protocol (HTTP? MQTT? CoAP?) to the central hub and then have those servers put the data into Kafka. Now if you want to use
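
To make the suggested shape concrete, here is a minimal hub-side sketch. It assumes a hypothetical receiveTelemetry() hook standing in for whatever HTTP/MQTT/CoAP endpoint terminates the satellite link; the broker address and topic name are placeholders, not anything from the original post:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TelemetryBridge {
        // Hypothetical carrier for one telemetry reading received at the hub
        static class TelemetryMessage {
            final String deviceId;
            final byte[] payload;
            TelemetryMessage(String deviceId, byte[] payload) {
                this.deviceId = deviceId;
                this.payload = payload;
            }
        }

        // Stand-in for whatever HTTP/MQTT/CoAP endpoint terminates the satellite link
        static TelemetryMessage receiveTelemetry() {
            return new TelemetryMessage("device-42", new byte[] {1, 2, 3});
        }

        public static void main(String[] args) {
            Properties props = new Properties();
            // The hub talks to Kafka over the local network, not over the satellite link
            props.put("bootstrap.servers", "hub-kafka:9092"); // placeholder address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

            try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
                TelemetryMessage msg = receiveTelemetry();
                producer.send(new ProducerRecord<>("telemetry", msg.deviceId, msg.payload));
            }
        }
    }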

Kafka over Satellite links

2016-03-02 Thread Jan
Hi folks; does anyone know about Kafka's ability to work over satellite links? We have an IoT telemetry application that uses satellite communication to send data from remote sites to a central hub. Any help/input/links/gotchas would be much appreciated. Regards, Jan

Greets to team with an issue

2016-03-02 Thread Arvind Sharma
Hi Team, first of all my warm regards to each and everyone in the community. We are using Kafka for centralizing the logging from our PHP application. Earlier our application wrote all the logs to a file in JSON line format, which we read using Filebeat (an ELK family member)

Re: migrating the main-page docs to gitbook format

2016-03-02 Thread Christian Posta
For sure! Will take a look! On Wednesday, March 2, 2016, Gwen Shapira wrote: > Hey! > > Yes! We'd love that too! Maybe you want to help us out with > https://issues.apache.org/jira/browse/KAFKA-2967 ? > > Gwen > > On Wed, Mar 2, 2016 at 2:39 PM, Christian Posta > > wrote: > > Would love to have

Re: About the number of partitions

2016-03-02 Thread BYEONG-GI KIM
Dear James, Thank you for the information indeed! That's very helpful; it lets me understand Kafka much more deeply. Best regards, Kim 2016-03-03 3:29 GMT+09:00 James Cheng : > Kim, > > Here's a good blog post from Confluent with advice on how to choose the > number of partitions. > > > ht

Re: About the number of partitions

2016-03-02 Thread BYEONG-GI KIM
Dear Jens, Thank you for the reply! It's really hard to decide how many brokers/partitions are optimal for a system. Are there any good reports or documents about that? I'd like to see some examples of such optimization, especially in a production-level environment. Thank you in advance. Best

Re: migrating the main-page docs to gitbook format

2016-03-02 Thread Gwen Shapira
Hey! Yes! We'd love that too! Maybe you want to help us out with https://issues.apache.org/jira/browse/KAFKA-2967 ? Gwen On Wed, Mar 2, 2016 at 2:39 PM, Christian Posta wrote: > Would love to have the docs in gitbook/markdown format so they can easily > be viewed from the source repo (or mirror

migrating the main-page docs to gitbook format

2016-03-02 Thread Christian Posta
Would love to have the docs in gitbook/markdown format so they can easily be viewed from the source repo (or mirror, technically) on github.com. They can also be easily converted to HTML, have a side-navigation ToC, and still be versioned along with the src code. Thoughts? -- *Christian Posta* t

RE: producer SSL issue - fails with ssl.trustore location/password is not valid

2016-03-02 Thread Martin Gainty
> From: aravindan.ramachand...@gmail.com > Date: Wed, 2 Mar 2016 12:28:41 -0800 > Subject: producer SSL issue - fails with ssl.trustore location/password is > not valid > To: users@kafka.apache.org > > INFO Registered broker 10 at path /brokers/ids/10 with addresses: PLAINTEXT > -> EndPoint(ka

producer SSL issue - fails with ssl.trustore location/password is not valid

2016-03-02 Thread aravindan ramachandran
INFO Registered broker 10 at path /brokers/ids/10 with addresses: PLAINTEXT -> EndPoint(kafka-ap-server101.com,9094,PLAINTEXT),SSL -> EndPoint( kafka-ap-server101.com,9095,SSL) (kafka.utils.ZkUtils) -bash-4.1$ bin/kafka-console-producer.sh --broker-list kafka-ap-server101.com:9095 --topic topic123
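
The property names the 0.9 clients expect are spelled ssl.truststore.* in full (the subject's "trustore" spelling would not match any config key). A minimal sketch of the client-side SSL settings, with placeholder paths and passwords; whether your console-producer build reads them via a --producer.config file is worth checking against your version:

    # client SSL settings (paths/passwords are placeholders)
    security.protocol=SSL
    ssl.truststore.location=/var/private/ssl/client.truststore.jks
    ssl.truststore.password=changeit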

Announcing rdkafka-dotnet - C# Apache Kafka client

2016-03-02 Thread Andreas Heider
Hi, I was really missing a high-quality Kafka client for C#/F#, so I built one: https://github.com/ah-/rdkafka-dotnet It’s based on the fantastic librdkafka, so it supports pretty much everything you might want: - High performance (I'm getting ~1 million msgs/second producing/consuming on my l

Re: Large Size Error even when the message is small

2016-03-02 Thread Fang Wong
Also, the key serializer is org.apache.kafka.common.serialization.StringSerializer and the value serializer is org.apache.kafka.common.serialization.ByteArraySerializer. On Wed, Mar 2, 2016 at 10:24 AM, Fang Wong wrote: > try (ByteArrayOutputStream bos = new ByteArrayOutputStream()) { > > try (Da

RE: 0.9.0.1 Kafka Java client Producer hangs while requesting metadata

2016-03-02 Thread Muthukumaran K
With metadata.fetch.timeout.ms=1, the producer send goes through. Not sure if this is going to affect anything else. Now trying to figure out why KafkaConsumer.poll(1) never returns. Regards, Muthu -Original Message- From: Muthukumaran K [mailto:muthukumara...@ericsson.com] Sent: W
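
For readers landing here, a sketch of where that knob sits in producer configuration (the address and value are illustrative only): in the 0.9 producer, metadata.fetch.timeout.ms bounds how long send() blocks waiting for topic metadata, so a small value fails fast instead of hanging:

    Properties props = new Properties();
    props.put("bootstrap.servers", "kafka-vm:9092"); // placeholder; must be reachable
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // Fail fast instead of blocking when metadata cannot be fetched
    props.put("metadata.fetch.timeout.ms", "1000");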

Consumer deadlock

2016-03-02 Thread Oleg Zhurakousky
Guys, we have a consumer deadlock and here is the relevant dump:

"Timer-Driven Process Thread-10" Id=76 TIMED_WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@39775787
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSuppor

Re: About the number of partitions

2016-03-02 Thread James Cheng
Kim, Here's a good blog post from Confluent with advice on how to choose the number of partitions. http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/ -James > On Mar 1, 2016, at 4:11 PM, BYEONG-GI KIM wrote: > > Hello. > > I have questions about how

Re: due to invalid request: Request of length

2016-03-02 Thread Fang Wong
Thanks Anirudh! I didn't send a large message, so it's most likely an encoding issue. How do I fix the encoding issue? I googled and found one link: https://github.com/st23am/ExKafka/issues/4 But we are using Java, so I couldn't see how to do this line: IO.iodata_to_binary(iodata) Thanks, Fang On Tue, Mar 1,

Re: Large Size Error even when the message is small

2016-03-02 Thread Fang Wong
try (ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
    try (DataOutputStream dos = new DataOutputStream(bos)) {
        // First element is the timestamp
        dos.writeLong(System.currentTimeMillis());
        // Second element is the class name, this element is used for deserializing the
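
The snippet above is cut off by the digest. A self-contained sketch of the same framing pattern, with the method name, the UTF-8 payload choice, and the toString() payload all being illustrative assumptions rather than the poster's actual code:

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class EventEncoder {
        // Sketch of the timestamp + class name + payload framing implied above
        static byte[] encode(Object event) throws IOException {
            try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
                 DataOutputStream dos = new DataOutputStream(bos)) {
                dos.writeLong(System.currentTimeMillis()); // first element: timestamp
                dos.writeUTF(event.getClass().getName());  // second element: class name
                dos.writeUTF(event.toString());            // payload (illustrative choice)
                dos.flush();
                return bos.toByteArray();
            }
        }
    }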

0.9.0.1 Kafka Java client Producer hangs while requesting metadata

2016-03-02 Thread Muthukumaran K
Hi, I'm trying a very simple producer with the following code; "producer.send" hangs indefinitely. Attaching a thread-dump snippet so that I can get some advice on whether something is wrong with my code or configuration. Kafka runs in a VM and the producer runs on the host; it's a single-broker setup for basic testin
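
One way to surface the hang while debugging (a sketch, not the poster's code; the address is a placeholder) is to bound the send with the Future it returns. If get() times out, the usual suspects in a VM setup are bootstrap.servers and the hostname the broker advertises not being resolvable from the host:

    import java.util.Properties;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class ProducerSmokeTest {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Must be an address the *host* can reach; a VM-internal hostname the
            // broker advertises but the host cannot resolve is a classic cause
            props.put("bootstrap.servers", "kafka-vm:9092"); // placeholder
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                RecordMetadata md = producer.send(new ProducerRecord<String, String>("test", "hello"))
                        .get(10, TimeUnit.SECONDS); // fail with a timeout instead of hanging
                System.out.println("written at offset " + md.offset());
            }
        }
    }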

Re: Unit tests of Kafka application

2016-03-02 Thread craig w
Kafka includes http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/MockConsumer.html http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/producer/MockProducer.html On Wed, Mar 2, 2016 at 12:50 PM, Madhire, Naveen < naveen.madh...@capitalone.com> wrote: > Hi, > > I want
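
A sketch of how those mocks are typically wired into a JUnit test (the topic, key, and value are placeholders): with autoComplete=true, every send() completes successfully without contacting a broker, and history() returns what was sent:

    import java.util.List;
    import org.apache.kafka.clients.producer.MockProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.junit.Assert;
    import org.junit.Test;

    public class ProducerTest {
        @Test
        public void sendsOneRecord() {
            // autoComplete=true makes every send() succeed immediately, in memory
            MockProducer<String, String> producer =
                    new MockProducer<>(true, new StringSerializer(), new StringSerializer());

            producer.send(new ProducerRecord<>("events", "key", "value"));

            List<ProducerRecord<String, String>> sent = producer.history();
            Assert.assertEquals(1, sent.size());
            Assert.assertEquals("value", sent.get(0).value());
        }
    }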

Re: session.timeout.ms limit

2016-03-02 Thread Jay Kreps
Hey Gligor, Sorry for the rough edges. I think there are a variety of rough edges in error messages here we can improve: 1. "Error ILLEGAL_GENERATION occurred while committing offsets for group MetadataConsumerSpout" is obviously NOT the most intuitive error message, it doesn't really ex

Unit tests of Kafka application

2016-03-02 Thread Madhire, Naveen
Hi, I want to write unit test cases for testing a Kafka application. Is there any specific kafka-junit or something which creates dummy producers and consumers to use in unit tests? Thanks.

Re: session.timeout.ms limit

2016-03-02 Thread Martin Skøtt
Hi, Regarding item 1, the maximum can be configured in the broker by changing group.max.session.timeout.ms. Regards, Martin On 2 March 2016 at 15:09, Gligor Vanessa wrote: > Hello, > > I am using Kafka higher consumer 0.9.0. I am not using the auto commit for > the offsets, so after I consume t
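
A sketch of the two sides of that setting (the values are examples only): the broker caps what any consumer in a group may request, and the consumer's requested timeout must stay within that cap or it will be rejected:

    # broker side, server.properties: raise the allowed ceiling
    group.max.session.timeout.ms=60000

    # consumer side: requested timeout must not exceed the broker's ceiling
    session.timeout.ms=60000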

Seek to invalid offset, new consumer

2016-03-02 Thread Giidox
Hi all! I am using the new consumer API and manually assigning partitions. I’m having some trouble with seek. When I seek with a valid offset, poll works OK. However, if I call seek with an offset so small that the broker no longer has it, poll returns no records. Is there a wa
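
A sketch of one way to handle this with the 0.9 assign/seek API (topic and group names are placeholders). When a fetch hits an offset the broker no longer retains, the consumer falls back to its auto.offset.reset policy; the default of "latest" jumps past all existing data, which looks exactly like an empty poll. Setting it to "earliest", or seeking to the beginning explicitly, resumes from the oldest retained offset:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class SeekDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "seek-demo");               // placeholder
            props.put("auto.offset.reset", "earliest");       // fall back to oldest, not newest
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("mytopic", 0); // placeholder topic
                consumer.assign(Arrays.asList(tp));
                consumer.seekToBeginning(tp); // the 0.9 signature takes varargs
                ConsumerRecords<String, String> records = consumer.poll(1000);
                System.out.println("fetched " + records.count() + " records");
            }
        }
    }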

session.timeout.ms limit

2016-03-02 Thread Gligor Vanessa
Hello, I am using the Kafka high-level consumer, 0.9.0. I am not using auto commit for the offsets, so after I consume the messages (poll from Kafka) I have to commit the offsets manually. The issue I have is that processing the messages takes longer than 30s (and I cannot ca
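
For readers hitting the same wall, a sketch of the manual-commit loop under discussion (topic, group, and the processing step are placeholders). The 0.9 failure mode: if handling a batch takes longer than session.timeout.ms between polls, the group coordinator assumes the consumer died, rebalances the group, and the eventual commit fails with ILLEGAL_GENERATION:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "manual-commit-demo");      // placeholder
            props.put("enable.auto.commit", "false");         // commit by hand instead
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("mytopic")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    for (ConsumerRecord<String, String> record : records) {
                        // processing must finish well within session.timeout.ms, or the
                        // coordinator evicts this consumer and the commit below fails
                        System.out.println(record.value());
                    }
                    consumer.commitSync(); // commit only after the batch is processed
                }
            }
        }
    }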

Re: Consumers don't always poll first messages

2016-03-02 Thread Jan Omar
Hi Robin, Why would you expect it to start from the first message? You're committing the read offsets automatically every second. The offset is persisted; next time you consume again, it will start at the persisted offset. consumerProperties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,
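
If the goal really is to re-read from the first message on every run, a sketch of the relevant knobs in the same style as the quoted code (the group name is a placeholder): either disable auto-commit, or start with a group.id that has no committed offsets and let auto.offset.reset take over:

    consumerProperties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    consumerProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    // a fresh group.id has no persisted offset, so the reset policy applies
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "fresh-group-" + System.currentTimeMillis());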

Re: session.timeout.ms limit - Kafka Consumer

2016-03-02 Thread Olson,Andrew
This topic is currently being discussed at https://issues.apache.org/jira/browse/KAFKA-2986 and https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records On 3/2/16, 8:11 AM, "Vanessa Gligor" wrote: >Hello, > >I am using Kafka higher consumer 0.9.0. I am not using t

Consumers don't always poll first messages

2016-03-02 Thread Péricé Robin
Hello everybody, I'm testing the new 0.9.0.1 API and I'm trying to get a basic example working. *Java code*: *Consumer*: http://pastebin.com/YtvW0sz5 *Producer*: http://pastebin.com/anQay9YE *Test*: http://pastebin.com/nniYLsHL *Kafka configuration*: *Zookeeper properties*: http://pastebin.

session.timeout.ms limit - Kafka Consumer

2016-03-02 Thread Vanessa Gligor
Hello, I am using the Kafka high-level consumer, 0.9.0. I am not using auto commit for the offsets, so after I consume the messages (poll from Kafka) I have to commit the offsets manually. The issue I have is that processing the messages takes longer than 30s (and I cannot ca

Kafka 0.9.0.1 new consumer - when no longer considered beta?

2016-03-02 Thread Sean Morris (semorris)
What are the plans for removing the "beta" label from the new consumer APIs? Will that be in another 0.9.0.x release? I assume since it has the "beta" label it should not be used in a production environment. Thanks.

Re: Large Size Error even when the message is small

2016-03-02 Thread Asaf Mesika
Can you show your code for sending? On Tue, 1 Mar 2016 at 21:59 Fang Wong wrote: > [2016-02-26 20:33:42,997] INFO Closing socket connection to /x due to > invalid request: Request of length 1937006964 is not valid, [2016-02-26 > 20:33:42,997] INFO Closing socket connection to /10.224.146.58 due

Re: About the number of partitions

2016-03-02 Thread Jens Rantil
Hi Kim, You are correct in that the number of partitions sets the upper limit on consumer parallelization. That is, a single consumer in a group can consume multiple partitions, however multiple consumers in a group can't consume a single partition. Also, since partitions are spread across your b

Consumer Offsets Topic cleanup.policy

2016-03-02 Thread Achanta Vamsi Subhash
Hi all, We have a __consumer_offsets topic that has cleanup.policy=compact and log.cleaner.enable=false. What would happen if we change the cleanup.policy to delete? Will that treat the offsets topic the same as any other topic? We currently have a setup without log.cleaner.enable=false and we have off
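
One caveat worth stating: with log.cleaner.enable=false the cleaner thread never starts, so the compact policy on __consumer_offsets never actually runs and old offsets are not being cleaned at all. A sketch of how a topic-level override would be changed in 0.9 (the ZooKeeper address is a placeholder; depending on your build, topic config changes may instead go through kafka-configs.sh):

    bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
      --topic __consumer_offsets --config cleanup.policy=delete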

does leader partition block ? 0.8.2

2016-03-02 Thread Anishek Agarwal
Hello, We have 4 topics deployed on a 4-node Kafka cluster. For one of the topics we are trying to read data from the beginning, using the Kafka high-level consumer. The topic has 32 partitions, and we create 32 streams using the high-level consumer so that one partition per stream is used; we then have 32
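
A sketch of the 0.8 high-level consumer setup being described (topic and group names are placeholders): createMessageStreams asks for 32 KafkaStreams so that each consuming thread owns one partition. Note that auto.offset.reset only applies when the group has no committed offsets, so re-reading from the beginning needs a fresh group:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ThirtyTwoStreams {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder
            props.put("group.id", "replay-group");            // placeholder; must be a fresh
                                                              // group to re-read old data
            props.put("auto.offset.reset", "smallest");       // old-API spelling of "earliest"

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
            topicCountMap.put("mytopic", 32); // one stream per partition; placeholder topic

            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(topicCountMap);
            // hand each of the 32 streams to its own thread for consumption
        }
    }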