Hi
We are experimenting with running a Kafka server on a Windows machine, but keep
getting exceptions when producing a lot of messages (in the neighborhood of 1
million).
kafka.message.InvalidMessageException: Message is corrupt (stored crc = 2963929431, computed crc = 2364631640)
at
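For context, Kafka detects this kind of corruption by storing a CRC-32 of each message alongside it and recomputing the checksum on read; a mismatch raises InvalidMessageException. A minimal sketch of the idea using the JDK's java.util.zip.CRC32 (not Kafka's actual code path, just the principle):

```java
import java.util.zip.CRC32;

public class CrcCheckDemo {
    // Compute the CRC-32 of a payload, as a producer would before sending.
    static long crcOf(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] stored = "hello kafka".getBytes();
        long storedCrc = crcOf(stored);

        // Simulate a single flipped bit, as in the corrupt message above.
        byte[] corrupted = stored.clone();
        corrupted[0] ^= 0x01;

        // The recomputed CRC no longer matches the stored one.
        System.out.println(storedCrc == crcOf(stored));     // true
        System.out.println(storedCrc == crcOf(corrupted));  // false
    }
}
```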
Neha,
Looks like an issue with the consumer rebalance not being able to complete
successfully. We were able to reproduce the issue on a topic with 30
partitions, 3 consumer processes (p1, p2 and p3), and the properties
rebalance.max.retries=40 and rebalance.backoff.ms=10000 (10s).
Before the process p3 was
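As a sketch, the rebalance settings described above would be supplied to the 0.8 high-level consumer roughly like this (the zookeeper.connect and group.id values are placeholders, and the 10s backoff is written out in milliseconds):

```java
import java.util.Properties;

public class RebalanceConfigSketch {
    // Build high-level consumer properties matching the report above.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "test-group");              // placeholder
        props.put("rebalance.max.retries", "40");   // retry up to 40 times
        props.put("rebalance.backoff.ms", "10000"); // back off 10s between tries
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("rebalance.backoff.ms"));
    }
}
```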
Hi Kafka users,
we're running a cluster of two Kafka 0.8.1.1 brokers, with twofold
replication of each topic.
When both brokers are up, after a short while the FetchRequestPurgatory
starts to grow indefinitely on the leader (detectable via a heap dump
and also via the
Hi
It appears that once I have opened a console producer, it does not
notice changes to the topic's partitions.
Say I have a topic with 3 partitions, and I open a console producer to
produce keyed data; it distributes the data across the 3 partitions. Then I
keep this producer open,
Hi Guys,
Could anyone explain to me how to use Kafka with Spark? I am using
JavaKafkaWordCount.java as a test, and the command line is:
./run-example org.apache.spark.streaming.examples.JavaKafkaWordCount
spark://192.168.0.13:7077 computer49:2181 test-consumer-group unibs.it 3
and like a
When running a C# producer against a Kafka 0.8.1.1 server running on a virtual
Linux machine (VirtualBox, Ubuntu), I keep getting the following error:
[2014-11-03 15:19:08,595] ERROR [KafkaApi-0] Error processing ProducerRequest
with correlation id 601 from client Kafka-Net on partition [x,0]
Yingkai,
Kafka uses persistent storage, so the data written to it will not be lost;
you just need to restart the cluster. During the downtime, however, it will
become unavailable.
Guozhang
On Fri, Oct 31, 2014 at 2:06 PM, Yingkai Hu yingka...@gmail.com wrote:
Hi All,
I’m new to Kafka, please
Hi,
Producers will only refresh their metadata periodically if no exceptions
are caught while sending data; you can configure this period via
topic.metadata.refresh.interval.ms (default is 600 seconds).
Guozhang
On Mon, Nov 3, 2014 at 6:51 AM, raymond rgbbones.m...@gmail.com wrote:
Hi
It
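The metadata refresh setting mentioned above can be lowered so producers notice new partitions sooner. A sketch of the 0.8.x producer properties (the broker list is a placeholder):

```java
import java.util.Properties;

public class MetadataRefreshSketch {
    // Build producer properties with a faster metadata refresh.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // placeholder
        // Lower the refresh period from the 600s default to 60s so that
        // newly added partitions are picked up sooner.
        props.put("topic.metadata.refresh.interval.ms", "60000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("topic.metadata.refresh.interval.ms"));
    }
}
```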
Hi Zuoning,
Since compression is a per-message(-set) attribute, a topic can have both
compressed and uncompressed messages, as Guozhang says;
and yes, this is supported by both the broker and the client (librdkafka in
this case).
Regards,
Magnus
2014-10-31 17:14 GMT+01:00 Zuoning Yin
Thanks for the clarification, Magnus!
On Mon, Nov 3, 2014 at 11:15 AM, Magnus Edenhill mag...@edenhill.se wrote:
Hi Zuoning,
Since compression is a per-message(-set) attribute, a topic can have both
compressed and uncompressed messages, as Guozhang says;
and yes, this is supported by both the
Neha,
In my last reply, the subject got changed, which is why it got marked as a
new message on
http://mail-archives.apache.org/mod_mbox/kafka-users/201411.mbox/date.
Please ignore that. Below text is the reply in continuation to
Not sure about the throughput, but:
"I mean that the words counted in Spark should grow up" - the Spark
word-count example doesn't accumulate.
It gets an RDD every n seconds and counts the words in that RDD, so we
don't expect the count to go up.
On Mon, Nov 3, 2014 at 6:57 AM, Eduardo Costa
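The per-batch behaviour described above can be illustrated without Spark: each micro-batch is counted from scratch, so counts never accumulate across batches. A plain-Java sketch:

```java
import java.util.*;

public class BatchWordCount {
    // Counts words in one micro-batch only -- mirroring the Spark
    // word-count example: each batch is counted independently, and
    // nothing accumulates across batches.
    static Map<String, Integer> countBatch(List<String> words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) counts.merge(w, 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> batch1 = countBatch(Arrays.asList("kafka", "spark", "kafka"));
        Map<String, Integer> batch2 = countBatch(Arrays.asList("kafka"));
        System.out.println(batch1.get("kafka")); // 2
        System.out.println(batch2.get("kafka")); // 1 -- batch2 starts fresh, not 3
    }
}
```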
Hi, All
I am running the Kafka producer code:

import java.util.*;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestProducer {
    public static void main(String[] args) {
        long events =
Are there any config parameter updates/changes? I see the doc here:
http://kafka.apache.org/documentation.html#configuration
now defaults to 0.8.2-beta. But it would be useful to know if anything has
changed from 0.8.1.1, just so we can be sure to update things, etc.
On Sat, Nov 1, 2014 at
Also, that doc refers to the 'new producer' as available in trunk and of
beta quality.
But from the announcement, it seems it's now more properly integrated in
the release? Also, where can I read about the 'kafka-client' referred to
above?
Thanks,
Jason
On Mon, Nov 3, 2014 at 4:46 PM, Jason
Hi,
How do people handle situations, and specifically the broker.id property,
where the Kafka (broker) cluster is not fully defined right away?
Here's the use case we have at Sematext:
* Our software ships as a VM
* All components run in this single VM, including 1 Kafka broker
* Of course, this
I am automating Kafka clusters in the cloud, and I am using IPs (stripped to
keep only the numbers) to define the broker ids.
You might have to add a shell script or whatever fits to set the broker id.
I hope this helps a bit.
Regards,
On 3 Nov 2014 23:04, Otis Gospodnetic
KAFKA-1070 will help with this and is pending a review.
On Mon, Nov 03, 2014 at 05:03:20PM -0500, Otis Gospodnetic wrote:
Hi,
How do people handle situations, and specifically the broker.id property,
where the Kafka (broker) cluster is not fully defined right away?
Here's the use case we
Yes, there are some changes, but they will be checked in prior to the full
release:
https://issues.apache.org/jira/browse/KAFKA-1728
Joel
On Mon, Nov 03, 2014 at 04:46:12PM -0500, Jason Rosenberg wrote:
Are there any config parameter updates/changes? I see the doc here:
Most folks strip the IP and use that as the broker.id. KAFKA-1070 does not
yet accommodate that very widely used method. I think it would be bad
if KAFKA-1070 only worked for new installations, because that is how people
use Kafka today (per
+1
That's what we use to generate broker ids in automated deployments.
This method makes troubleshooting easier (you know where each broker is
running) and doesn't require keeping extra files around.
On Mon, Nov 3, 2014 at 2:17 PM, Joe Stein joe.st...@stealth.ly wrote:
Most folks strip the IP
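The IP-stripping scheme discussed in the last few messages can be sketched as follows (note: broker.id must fit in an int, so IPs with large octets can overflow under this scheme):

```java
public class BrokerIdFromIp {
    // Derive a broker id by stripping the dots from an IP address,
    // e.g. "192.168.0.13" -> 192168013. A sketch of the approach
    // described above, not code from any Kafka deployment tool.
    static long brokerIdFor(String ip) {
        return Long.parseLong(ip.replace(".", ""));
    }

    public static void main(String[] args) {
        System.out.println(brokerIdFor("192.168.0.13")); // prints 192168013
    }
}
```

In practice this would run in a startup shell script that writes the derived id into server.properties before the broker launches.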
I agree it would be really nice to get KAFKA-1070 figured out.
FWIW, the reason for having a name or id other than ip was to make it
possible to move the identity to another physical server (e.g. scp the data
directory) and have it perform the same role on that new piece of hardware.
Systems that
Koert,
The Java consumer in 0.8.2 beta only has the API and hasn't been
implemented yet. The implementation will likely be completed in 0.9.
Thanks,
Jun
On Sat, Nov 1, 2014 at 8:18 AM, Koert Kuipers ko...@tresata.com wrote:
joe,
looking at those 0.8.2 beta javadoc I also see a Consumer api and
Erik,
It seems that we can customize the MBean names with Metrics 2.2.0. Are there
any other reasons we need to downgrade to Metrics 2.1.5?
Thanks,
Jun
On Sun, Nov 2, 2014 at 12:10 PM, Erik van Oosten
e.vanoos...@grons.nl.invalid wrote:
Hi Jun,
The quotes are because of a regression in
Bhavesh,
That example has a lot of code. Could you provide a simpler test that
demonstrates the problem?
Thanks,
Jun
On Fri, Oct 31, 2014 at 10:07 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com
wrote:
Hi Jun,
Here is code base:
It seems that the consumer can't connect to the broker for some reason. Any
other error on the broker? Any issue with the network?
Thanks,
Jun
On Sat, Nov 1, 2014 at 9:46 PM, Chen Wang chen.apache.s...@gmail.com
wrote:
Hello Folks,
I am using the high-level consumer, and it seems to drop
Koert, these two classes belong to the 0.9 consumer API, which is not
dev-ready yet. We only checked in the APIs so people can review and comment
on them.
Guozhang
On Nov 1, 2014 8:26 AM, Koert Kuipers ko...@tresata.com wrote:
joe,
looking at those 0.8.2 beta javadoc I also see a Consumer api and
Are you using the java producer?
Thanks,
Jun
On Mon, Nov 3, 2014 at 3:31 AM, Fredrik S Loekke f...@lindcapital.com
wrote:
Hi
We are experimenting with running a Kafka server on a Windows machine, but
keep getting exceptions when producing a lot of messages (in the
neighborhood of 1
Hi,
We are using an async producer to send data to kafka.
The load the sender handles is around 250 rps, and the size of a message is
around 25 KB.
The configs used in the producer are :
request.required.acks=0
producer.type=async
batch.num.messages=10
topic.metadata.refresh.interval.ms=3