Hi Kafka Team,
I need to count the messages received by the entire Kafka broker cluster for a
particular topic.
I have 3 brokers, so do I need to sum the COUNT metric across them, or does
one server's count reflect the count for all servers? It seems that the count
is always increasing (although the metric name is *MessagesInPerSec*), so
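For what it's worth, the Count attribute behind *MessagesInPerSec* is a cumulative per-broker counter, so a cluster-wide total does require summing across all brokers. A minimal sketch with hypothetical per-broker values (in practice each number would come from that broker's JMX endpoint):

```python
# Hypothetical readings of the MessagesInPerSec Count attribute for one
# topic; in a real setup these would be fetched over JMX from each broker.
per_broker_counts = {
    "broker-1": 120_000,
    "broker-2": 98_500,
    "broker-3": 101_500,
}

# The Count attribute is cumulative and per-broker, so the cluster-wide
# message total for the topic is the sum over all brokers.
cluster_total = sum(per_broker_counts.values())
print(cluster_total)  # 320000
```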
Hi,
I want to know in which situations Kafka sends the same event multiple times
to a consumer. Is there a consumer-side configuration to tell Kafka to send
each event only once and stop retries?
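Kafka's delivery semantics here are at-least-once: redelivery can follow consumer rebalances, offset-commit failures, or producer retries, and there is no consumer-side switch for exactly-once in 0.8.x. A common workaround is idempotent consumption, deduplicating on a unique message ID; a minimal sketch (the message format and ID field are hypothetical):

```python
# Idempotent consumption sketch: skip events whose ID was already processed.
# Assumes each message carries a unique "id"; in production the seen-ID set
# would live in a persistent store, not in memory.
seen_ids = set()
processed = []

def handle(message):
    if message["id"] in seen_ids:
        return  # duplicate delivery, ignore
    seen_ids.add(message["id"])
    processed.append(message["payload"])

# Simulate a redelivered event (id 1 arrives twice).
for msg in [{"id": 1, "payload": "a"}, {"id": 2, "payload": "b"},
            {"id": 1, "payload": "a"}]:
    handle(msg)

print(processed)  # ['a', 'b']
```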
--
Regards
Tousif Khazi
Hi,
I discovered that the new mirror maker implementation in trunk now only
accepts one consumer.config property instead of a list of them, which means
we can only supply one source per mirror maker process. Is there a reason for
this? If I have multiple source Kafka clusters, do I need to set up multiple
Hi all ,
My application creates a Kafka topic at runtime with AdminUtils.createTopic,
but that topic is not available when I try to produce to it.
If I run "bin/kafka-console-consumer.sh --topic $topic --zookeeper $zkStr
--from-beginning", it throws UnknownTopicOrPartitionException.
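Topic creation via AdminUtils is asynchronous: the topic is registered in ZooKeeper first and brokers pick up the metadata afterwards, so producing immediately can fail. One workaround is to poll until the topic's metadata is visible before producing. A minimal sketch with a stand-in `topic_exists` callable (the real check would issue a topic-metadata request to a broker):

```python
import time

def wait_for_topic(topic_exists, topic, timeout_s=30.0, poll_s=0.5):
    """Poll until the topic's metadata is visible, or raise on timeout.

    topic_exists is a callable returning True once brokers know the topic;
    in real code it would query broker metadata for the topic name.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if topic_exists(topic):
            return True
        time.sleep(poll_s)
    raise TimeoutError(f"topic {topic!r} not visible after {timeout_s}s")

# Simulated metadata check that succeeds on the third poll.
calls = {"n": 0}
def fake_exists(topic):
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_topic(fake_exists, "events", timeout_s=5.0, poll_s=0.01))  # True
```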
Also, I fou
Hi Yang,
I think my problem is not the same as yours.
My production environment is fine; log.retention.hours is 720. My disk is
almost full simply because of too much data.
I want a utility or command to delete log data manually, instead of waiting
until it expires.
Thanks for your reply.
Than
YuanJia,
I want to know why your broker's disk is almost full. Maybe your issue is
the same as mine: one broker is out of service for a long time, then it
fails back; at first it looks good, but after some hours its disk is
almost full. The other brokers are all fine; the average remaining capacity is
It sounds like you have two zookeepers, one for HDP and one for Kafka.
Did you move Kafka from one zookeeper to another?
Perhaps Kafka finds the topics (logs) on disk, but they do not exist
in ZK because you are using a different zookeeper now.
Gwen
On Thu, Jan 22, 2015 at 6:38 PM, Jun Rao wrote:
Any error in the controller and the broker log?
Thanks,
Jun
On Thu, Jan 22, 2015 at 1:33 AM, wrote:
> Hi,
> Let me give an overview of the issue I am facing when producing messages in
> Kafka:
> I have Hortonworks HDP-2.1 installed; alongside that, we have Kafka on
> another node.
>
> * On kafka node:
> S
In general, you shouldn't delete any files while the broker is up. If you
have no other choice, you can try deleting the older log segments and hope
that no consumer or the log cleaner is using them.
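Building on that advice, segment files are named by the base offset of their first message, so sorting the filenames orders segments oldest to newest, and the newest (active) segment must never be touched. A minimal sketch for listing deletion candidates, demonstrated against a fake partition directory:

```python
import os
import tempfile

def oldest_segments(partition_dir, keep=1):
    """Return .log segment names oldest-first, excluding the newest `keep`.

    Kafka names segments by the base offset of their first message, so a
    lexicographic sort of the zero-padded filenames orders them oldest to
    newest. The active (newest) segment must never be removed.
    """
    segments = sorted(
        f for f in os.listdir(partition_dir) if f.endswith(".log")
    )
    return segments[:-keep] if keep else segments

# Demo against a fake partition directory layout.
d = tempfile.mkdtemp()
for name in ["00000000000000000000.log",
             "00000000000000052368.log",
             "00000000000000104737.log"]:
    open(os.path.join(d, name), "w").close()

print(oldest_segments(d))
# ['00000000000000000000.log', '00000000000000052368.log']
```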
Thanks,
Jun
On Thu, Jan 22, 2015 at 12:40 AM, YuanJia Li wrote:
> Hi all,
> The kafka broker
Perhaps you can upgrade all brokers and then try?
Thanks,
Jun
On Wed, Jan 21, 2015 at 9:53 PM, Raghu Udiyar wrote:
> No errors in the state-change log or the controller. It's as if the
> controller never got the request for that partition.
>
> Regarding the upgrade, we did upgrade one of the no
Hmm, kafka-console-consumer in 0.8.2 rc2 is running fine. Do you have
multiple kafka jars in your classpath?
Thanks,
Jun
On Thu, Jan 22, 2015 at 4:58 PM, Jason Rosenberg wrote:
> 2015-01-23 00:55:25,273 WARN [async-message-sender-0] common.AppInfo$ -
> Can't read Kafka version from MANIFEST.M
2015-01-23 00:55:25,273 WARN [async-message-sender-0] common.AppInfo$ -
Can't read Kafka version from MANIFEST.MF. Possible cause:
java.lang.NullPointerException
Hi, Guozhang
Can I run this package remotely to test another server? I mean, can I run
this package on dev while testing the Kafka system on production?
thanks
AL
On Thu, Jan 22, 2015 at 2:55 PM, Sa Li wrote:
> Hi, Guozhang,
>
> Good to know about such a package, will try it now. :-)
>
> thanks
>
> On Thu, Jan
Hi, Guozhang,
Good to know about such a package, will try it now. :-)
thanks
On Thu, Jan 22, 2015 at 2:40 PM, Guozhang Wang wrote:
> Hi Sa,
>
> Have you looked into the system test package? It contains a suite of tests
> on different failure modes of Kafka brokers.
>
> Guozhang
>
>
> On Thu, Jan 22, 2
Hi Sa,
Have you looked into the system test package? It contains a suite of tests
on different failure modes of Kafka brokers.
Guozhang
On Thu, Jan 22, 2015 at 12:00 PM, Sa Li wrote:
> Hi, All
>
> We are about to deliver kafka production server, I have been working on
> different test, like p
Hey Joe, with other input types (like file) one can reference things like
the path in the filter section.
Is it possible to refer to the topic_id in the filter section? I tried and
nothing obvious worked.
We are encoding a few things (like host name and type) in the name of the
topic, and would
Yep, sorry, had a rough day..
On Thursday, January 22, 2015 2:25 PM, Guozhang Wang
wrote:
Hi Zijing,
Sounds like you sent to the wrong mailing list :P
Guozhang
On Thu, Jan 22, 2015 at 11:12 AM, Zijing Guo
wrote:
> Hi, I'm using Apache Spark 1.1.0 and I'm currently having an issue wit
Hi, All
We are about to deliver a Kafka production server, and I have been working on
different tests, like the performance test from LinkedIn. This is a 3-node
cluster with a 5-node ZooKeeper ensemble. I assume there are lots of tests I
need to do, like network, node failure, flush time, etc. Is there a complete
Hi Zijing,
Sounds like you sent to the wrong mailing list :P
Guozhang
On Thu, Jan 22, 2015 at 11:12 AM, Zijing Guo
wrote:
> Hi, I'm using Apache Spark 1.1.0 and I'm currently having an issue with
> broadcast method. So when I call broadcast function on a small dataset to a
> 5 nodes cluster, I expe
Hi, I'm using Apache Spark 1.1.0 and I'm currently having an issue with the
broadcast method. When I call the broadcast function on a small dataset to a
5-node cluster, I experience the "Error sending message as driverActor is
null" error after broadcasting the variables several times (apps running
under JBoss).
For now (0.8.2), if you have a hard crash then you will likely have
duplicates or even data loss. Unfortunately, I do not have a good solution
to this off the top of my head.
On Wed, Jan 21, 2015 at 10:14 PM, Madhukar Bharti
wrote:
> Thanks Guozhang for your reply.
>
> Checked the details as mentioned
Hi David,
The "per-topic" configs just override the global configs for that
specific topic; the retention.bytes config is applied to each partition
of that topic individually.
So if you have two topics each with two partitions and replication factor 1
with retention.bytes valued A then the to
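Following that per-partition logic, the disk footprint a topic can reach is roughly retention.bytes multiplied by its partition count (and by the replication factor, cluster-wide). A small worked sketch of that arithmetic:

```python
def max_topic_bytes(retention_bytes, partitions, replication_factor=1):
    # retention.bytes is enforced per partition, so a topic's total
    # footprint across the cluster scales with partitions and replicas.
    return retention_bytes * partitions * replication_factor

# Two partitions, replication factor 1, retention.bytes = A (say 1 GiB):
A = 1 << 30
print(max_topic_bytes(A, partitions=2))  # 2147483648  (2 GiB cluster-wide)
```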
Vishal,
Does this error happen every time you are sending? Or just the first time?
Joe Stein
On Thu, Jan 22, 2015 at 4:33 AM, wrote:
> Hi,
> Let me give an overview of the issue I am facing when producing messages in
> Kafka:
> I have Hortonworks HDP-2.1 installed; alongside that, we have Kafka on other
Hi,
Let me give an overview of the issue I am facing when producing messages in Kafka:
I have Hortonworks HDP-2.1 installed; alongside that, we have Kafka on another node.
* On the Kafka node:
Start ZooKeeper
Start the Kafka broker service
Send message/producer
Consume message - works (Note: here we start ZooKeeper
Just trying to get everything in prior to the 1.5 release.
From: Scott Chapman
Sent: Thursday, January 22, 2015 9:32 AM
To: users@kafka.apache.org
Subject: Re: using the new logstash-kafka plugin
Awesome, what release are you targeting? Or are you able to
Awesome, what release are you targeting? Or are you able to make updates to
the plugin outside of kafka?
On Thu Jan 22 2015 at 9:31:26 AM Joseph Lawson wrote:
> Scott you will have to do just one topic per input right now but multiple
> topics per group, whitelisting and blacklisting just got me
Scott, you will have to do just one topic per input right now, but multiple
topics per group, whitelisting, and blacklisting just got merged into
jruby-kafka. I'm working them up the chain to my logstash-kafka and then
will pass them to the logstash-input/output-kafka plugins.
Hi list,
please help me understand the per-topic retention.* settings (in Kafka
0.8.1.1) set via:
bin/kafka-topics.sh --zookeeper $ZK --alter --topic $TOPIC --config
retention.bytes=VALUE
I understand from this:
http://search-hadoop.com/m/4TaT4E9f78/retention.bytes&subj=estimating+log+retent
Hi all,
The Kafka broker's disk is almost full, and lsof shows the log files are
still held open by Kafka.
I know I can change log.retention.hours in server.properties, but I don't
want to restart the Kafka server.
Is there any utility to delete log files without impacting Kafka?
Thanks & Regar