Hi Kafka Team,
Please confirm if you would like to open a Jira issue to track this?
Thanks,
Bhavesh
On Mon, Feb 9, 2015 at 12:39 PM, Bhavesh Mistry
wrote:
> Hi Kafka Team,
>
> We are getting this connection reset by peer a couple of minutes after
> start-up of the producer due to infrastructure
Hi Gwen,
These JMX stats are good for calculating the injection rate per partition. I do
not have to depend on ZK to figure out who the leader is or what the latest
offset is.
One quick question: what is Size #? Is it the # of bytes a particular
partition has on disk? Unfortunately, the MBean description is very l
Hi, Jun
I created a Jira issue to track it. Please have a look at it.
https://issues.apache.org/jira/browse/KAFKA-1939
Thanks.
Xinyi
On 10 February 2015 at 10:38, Jun Rao wrote:
> The new producer uses a different built-in metrics package. Currently, it
> only supports a jmx reporter for the me
Ryco, you are correct: delete topic is a new feature for 0.8.2
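(For reference, a sketch of the 0.8.2 form of the command; the ZooKeeper address and topic name are placeholders, and the broker must also have topic deletion enabled:)

```
# delete a topic (ZooKeeper address and topic name are illustrative)
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic my-topic

# and in server.properties on each broker:
delete.topic.enable=true
```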
2015-02-09 19:53 GMT-08:00 Ryco Xiao :
> When I exec the delete command, the returned information is below:
> It says kafka-topics.sh does not support the delete parameter.
> My package was compiled by myself.
Hi Mayuresh,
Does it mean we need to maintain the offset in a DB? After every message post,
we can check the last offset maintained in the DB, e.g. it is 20; then on every
message post we can check the latest offset present in the given topic, and if it is
greater than 30, then we can add one more entry in d
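(A minimal sketch of the bookkeeping described above, with a plain dict standing in for the database and illustrative numbers; a real system would query the topic's latest offset through the consumer API:)

```python
# Sketch: decide whether to record a new checkpoint, assuming we can read
# the last offset stored in the DB and the latest offset in the topic.
# The dict below is a stand-in for a real database table.

def maybe_checkpoint(db, topic, latest_offset, threshold=10):
    """Record a new checkpoint when the topic has advanced past the
    last stored offset by more than `threshold` messages."""
    last = db.get(topic, 0)
    if latest_offset - last > threshold:
        db[topic] = latest_offset  # "add one more entry" in the DB
        return True
    return False

db = {"orders": 20}
# Topic advanced to offset 31: 31 - 20 = 11 > 10, so we checkpoint.
print(maybe_checkpoint(db, "orders", 31))  # True
print(db["orders"])                        # 31
```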
I don't have strong evidence that this is a bug yet. Let me write a test
program and see if I can confirm/reproduce the issue.
On Mon, Feb 9, 2015 at 7:59 PM, Jay Kreps wrote:
> Hmm, that does sound like a bug, we haven't seen that. How easy is it to
> reproduce this?
>
> -Jay
>
> On Mon, Feb
When I exec the delete command, the returned information is below:
It says kafka-topics.sh does not support the delete parameter.
My package was compiled by myself.
Hmm, that does sound like a bug, we haven't seen that. How easy is it to
reproduce this?
-Jay
On Mon, Feb 9, 2015 at 5:19 PM, Steven Wu wrote:
> We observed some small discrepancy in messages sent per second reported at
> different points. 1) and 4) match very closely. 2) and 3) match very
>
The new producer uses a different built-in metrics package. Currently, it
only supports a jmx reporter for the metrics. So you will have to get the
metrics from jmx.
We can add the csv reporter in ProducerPerformance for the new producer by
using the new metrics api. Could you file a jira for that?
This exception should be transient (which is why we capture it as an INFO-level
log entry) and can be ignored.
We are currently working on the new consumer APIs, and will improve our
logging pattern to avoid such confusing information.
Guozhang
On Mon, Feb 9, 2015 at 5:33 PM, tao xiao wrote:
We observed some small discrepancy in messages sent per second reported at
different points. 1) and 4) match very closely. 2) and 3) match very
closely but are about *5-6% lower* compared to 1) and 4).
1) send attempt from producer
2) send success from producer
3) record-send-rate reported by kafka
It happens every time I shut down the connector. It doesn't block the
shutdown process, though.
On Tue, Feb 10, 2015 at 1:09 AM, Guozhang Wang wrote:
> Is this exception transient or consistent and blocking the shutdown
> process?
>
> On Mon, Feb 9, 2015 at 3:07 AM, tao xiao wrote:
>
> > Hi team,
Hey everyone,
I’m hoping someone can help me with an issue I’m having. I’ll be using my
console output, so I’m sorry for the console spam. :)
So first, I list my topics:
? kafka_2.11-0.8.2.0 bin/kafka-topics.sh --list --zookeeper localhost:6002
dog
? kafka_2.11-0.8.2.0
And I have a topic ca
A simple Nagios check_tcp works fine. As Gwen indicated, Kafka closes the
connection on me, but this is (supposedly) harmless. I see in the server logs:
[2015-02-09 19:39:17,069] INFO Closing socket connection to /192.168.1.31.
(kafka.network.Processor)
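(For reference, a check_tcp service definition might look like the sketch below; the host name and port are placeholders, and check_tcp ships with the standard Nagios plugins:)

```
define service {
    use                 generic-service
    host_name           kafka-broker-1
    service_description Kafka TCP port
    check_command       check_tcp!9092
}
```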
On Mon, Feb 9, 2015 at 6:06 PM, Scott Clasen wr
Thanks for the heads up!
Please consider updating versions in JIRA - 0.8.2 --> 0.8.2.0, and labeling
0.8.2.0 as released.
Kind regards,
Stevo Slavic.
On Wed, Jan 14, 2015 at 6:54 PM, Jun Rao wrote:
> About the versioning, we had released 0.8.1 and 0.8.1.1 before, which is a
> bit inconsistent in t
Hi,
Can somebody provide me with an example of how to formulate an
OffsetCommitRequest for a single stream/partition using SimpleConsumer from
java?
Both ends, really ... periodically issuing commits, but also how to get the
current offset when starting up.
I can show what I'm attempting ... bu
I have used Nagios in this manner with Kafka before and it worked fine.
On Mon, Feb 9, 2015 at 2:48 PM, Koert Kuipers wrote:
> I would like to be able to ping Kafka servers from Nagios to confirm they
> are alive. Since Kafka servers don't run an HTTP server (web UI), I am not
> sure how to do this.
Hi Gwen,
Can you share how you do these end-to-end latency tests? I am more of a sysadmin
than a coder and have wanted to get something like that going for my Kafka
clusters. I'd love more details about how you do it and how you monitor the
results.
Thanks!
It's safe.
Just note that if you send Kafka anything it does not like, it will close
the connection on you. This is intentional and doesn't signal an issue with
Kafka.
Not sure if Nagios does this, but I like "canary" tests - produce a message
with timestamp every X seconds and have a monitor tha
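(A sketch of the canary idea above, using an in-memory queue as a stand-in for the Kafka produce/consume round trip; the latency threshold is illustrative:)

```python
import time
from collections import deque

queue = deque()  # stand-in for the Kafka topic

def produce_canary():
    """Publish a canary message whose payload is the current timestamp."""
    queue.append(time.time())

def check_canary(max_latency_s=30.0):
    """Consume the canary and alert if it is older than the threshold."""
    sent_at = queue.popleft()
    latency = time.time() - sent_at
    return "OK" if latency <= max_latency_s else "CRITICAL"

produce_canary()
print(check_canary())  # "OK" when the round trip is fast
```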
I would like to be able to ping Kafka servers from Nagios to confirm they
are alive. Since Kafka servers don't run an HTTP server (web UI), I am not
sure how to do this.
Is it safe to establish a "test" TCP connection (so connect and immediately
disconnect using telnet or netstat or something like th
Hi All,
I am running kafka-consumer-perf-test.sh in my test environment to simulate
"consumer" load on my Kafka broker.
Currently I have millions of entries in the log on which this shell script is
running. I would like to know how I can check if the .sh is running
fine.
All I see is the below en
Hi Kafka Team,
We are getting this connection reset by peer a couple of minutes after
start-up of the producer due to infrastructure deployment strategies we have
adopted from LinkedIn.
We have an LB hostname and port as the seed server, and all producers are getting
the following exception because of TCP i
Anyone would like to give some help? I can't send a KeyedMessage to brokers with
partitioner.class=kafka.producer.DefaultPartitioner. I have a 2-node
Kafka cluster with 2 instances of brokers and zookee
Thank you for your help, and I apologize for not adding sufficient detail in my
original question. To elaborate on our use case we are trying to create a
system tracing/monitoring app (which is of high importance to our business)
where we are trying to read messages from all of our Kafka topics.
Yeah, I think I figured it out. I didn't realize the person doing the test
created the message using the console-consumer, so I think the newline was
escaped.
On Mon Feb 09 2015 at 11:59:57 AM Gwen Shapira
wrote:
> Since the console-consumer seems to display strings correctly, it sounds
> like an i
Is this exception transient or consistent and blocking the shutdown process?
On Mon, Feb 9, 2015 at 3:07 AM, tao xiao wrote:
> Hi team,
>
> I got java.nio.channels.ClosedByInterruptException when
> closing ConsumerConnector using kafka 0.8.2
>
> Here is the exception
>
> 2015-02-09 19:04:19 INFO
Since the console-consumer seems to display strings correctly, it sounds
like an issue with LogStash parser. Perhaps you'll have better luck asking
on LogStash mailing list?
Kafka just stores the bytes you put in and gives the same bytes out when
you read messages. There's no parsing or encoding d
Hello CJ,
You have to set the fetch size to be >= the maximum message size possible;
otherwise, consumption will block upon encountering these large messages.
I am wondering, by saying "poor performance", what do you mean exactly? Are
you seeing low throughput, and can you share your consumer c
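(In config terms, a sketch for the 0.8.x high-level consumer; the value is illustrative and must be at least the broker's message.max.bytes:)

```
# consumer.properties sketch
fetch.message.max.bytes=10485760   # >= message.max.bytes on the broker
```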
Jay,
Thanks I'll look at that more closely.
On Sat, Feb 7, 2015 at 1:23 PM, Jay Kreps wrote:
> Steve
>
> In terms of mimicing the sync behavior, I think that is what .get() does,
> no?
>
> We are always returning the offset and error information. The example I
> gave didn't make use of it, but y
If you mean setting up Kafka on ec2:
https://www.youtube.com/watch?v=ArUHr3Czx-8
the commands may differ depending on which type of ec2 instance you are
using.
Also: http://kafka.apache.org/documentation.html#introduction
On Mon, Feb 9, 2015 at 4:53 AM, Sharath N wrote:
> Hi, any one please h
Hi, can anyone please help me with how to integrate Kafka and EC2?
Thanks and Regards
Sharath N
So, avoiding a bit of a long explanation on why I'm doing it this way...
But essentially, I am trying to put multi-line messages into Kafka and then
parse them in Logstash.
What I think I am seeing in Kafka (using console-consumer) is this:
"line 1 \nline 2 \nline 3\n"
Then when I get it into l
Yep, still applicable.
They will do the same thing (commit offsets at regular intervals), only with
Kafka instead of ZooKeeper.
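(A consumer config sketch for Kafka-based offsets in 0.8.2; the values are illustrative. dual.commit.enabled matters when migrating from ZooKeeper-based storage:)

```
offsets.storage=kafka
dual.commit.enabled=false
auto.commit.enable=false
auto.commit.interval.ms=60000
```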
On Mon, Feb 9, 2015 at 2:57 AM, tao xiao wrote:
> Hi team,
>
> If I set offsets.storage=kafka can I still use auto.commit.enable to turn
> off auto commit and auto.commi
I'm guessing the upgrade changed your broker configuration file
(server.properties).
Perhaps take a look and see if things like max.message.bytes are still
where you want them?
Gwen
On Sun, Feb 8, 2015 at 11:24 AM, Ricardo Ferreira <
jricardoferre...@gmail.com> wrote:
> Hi Gwen,
>
> Sorry, both
Hi CJ,
I recently ran into some Kafka message-size related issues and did some
digging around to understand the system. I will briefly describe those details
and hope it will help you.
Each consumer connector has fetcher threads and fetcher-manager threads
associated with it. The fetcher thread talks t
Hi team,
I got java.nio.channels.ClosedByInterruptException when
closing ConsumerConnector using kafka 0.8.2
Here is the exception
2015-02-09 19:04:19 INFO kafka.utils.Logging$class:68 -
[test12345_localhost], ZKConsumerConnector shutting down
2015-02-09 19:04:19 INFO kafka.utils.Logging$clas
Hi team,
If I set offsets.storage=kafka, can I still use auto.commit.enable to turn
off auto commit and auto.commit.interval.ms to control the commit interval? As
the documentation mentions, the above two properties are used to
control offsets in ZooKeeper.
--
Regards,
Tao
Any help on this subject, please?
2015-02-05 10:45 GMT+01:00 Anthony Pastor :
> We're using Kafka 0.8.1.1 on Debian 7.7
>
> - Logs when I migrate a specific topic (~20GB) from kafka5 to kafka2 (no
> problem that way):
> - controller.log: No logs.
>
> - Logs when I migrate the same specific to
Can you post the exception stack-trace?
On Mon, Feb 9, 2015 at 2:58 PM, Gaurav Agarwal
wrote:
> hello
> We are sending a custom message across producer and consumer, but are
> getting a class cast exception. This works fine with a String
> message and string encoder,
> but it did not work with a cus
Hello,
We are sending a custom message across producer and consumer, but are
getting a class cast exception. This works fine with a String
message and string encoder,
but it did not work with a custom message; I got a class cast
exception. I have a message with a couple of String attributes
The high-level consumer of 0.8.1 works fine with 0.8.2. In addition, you can
change the config to use Kafka for offset storage instead of ZooKeeper.
There are some extra config parameters added as well, as explained in the
documentation:
http://kafka.apache.org/documentation.html#consumerconfigs
For low-level con