the overall cluster perf based on
the retention time. Since you already reduced retention to 1 hour, you should be
good.
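(I assume that means something like log.retention.hours=1 in server.properties, or a
retention.ms=3600000 override on the affected topics.)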
Thanks
Zakee
> On Jun 28, 2018, at 2:36 PM, Vignesh wrote:
>
> Hello kafka users,
>
> How to recover a kafka broker from disk full ?
>
> I updated the lo
Did you try with --new-consumer?
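Something roughly along these lines (broker address and topic name are placeholders):
bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic my-topic --from-beginning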
-Zakee
> On Mar 20, 2018, at 10:26 AM, Anand, Uttam wrote:
>
> I am facing an issue while consuming messages using the bootstrap-server (i.e.,
> the Kafka server). Any idea why it is not able to consume messages without
> ZooKeeper?
>
What version of Kafka are you on? There are issues with delete topic until, I
guess, Kafka 0.10.1.0.
You may be hitting this issue… https://issues.apache.org/jira/browse/KAFKA-2231
Thanks
Zakee
> On Apr 6, 2017, at 11:20 AM, Adria
Hi Nico,
They can be defined at both the cluster and topic levels. Refer to
https://kafka.apache.org/documentation/#topic-config for the topic-level overrides
available.
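For example, a per-topic override can be applied roughly like this (topic name,
ZooKeeper address and the values are only placeholders):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name my-topic --add-config retention.ms=86400000,cleanup.policy=compact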
Cheers!
-Z
> On Mar 8, 2017, at 12:41 PM, Nicolas MOTTE wrote:
>
> Hi everyone,
>
> Is there any reason why retention and cleanup
+1
-Zakee
> On Feb 14, 2017, at 1:56 PM, Jay Kreps wrote:
>
> +1
>
> Nice improvement.
>
> -Jay
>
> On Tue, Feb 14, 2017 at 1:22 PM, Steven Schlansker <
> sschlans...@opentable.com> wrote:
>
>> Hi, it looks like I have 2 of the 3 minimum votes,
Brokers failed repeatedly, leaving behind page cache in memory, which caused
broker restarts to fail with OOM every time.
After manually cleaning up the page cache, I was able to restart the broker.
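(For anyone hitting the same thing: the OS page cache can typically be dropped with
something like "sync && echo 1 > /proc/sys/vm/drop_caches", run as root.)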
However, still wondering what could have caused this state in the first place.
Any ideas?
-Zakee
l.map(FileChannelImpl.java:904)
... 28 more
Thanks
-Zakee
No, a newer client API won’t work with an older broker version. Generally, an older
client should be able to work with a newer broker version.
-Zakee
> On Nov 18, 2016, at 11:58 AM, Weian Deng wrote:
>
> More specifically, Is Kafka Java client 0.9.0.1 compatible with Kafka broker
(part of) data and
maintains its replica state on different nodes.
Same thing on the consumer end: you can start as many parallel processes (or
threads) as there are partitions of a topic, to achieve better throughput.
-Zakee
> On Nov 16, 2016, at 9:03 AM, Tauzell, Dave
> wrote:
>
>
Yes, offsets are unique per partition.
> I've observed that, for example, I had offset values equal to zero more times
> than the number of Kafka partitions.
Can you elaborate a little more what you observed?
-Zakee
> On Nov 14, 2016, at 10:06 AM, Dominik Safaric
> w
Are these the only logs you see, or are there more log events before this
that might be relevant?
-Zakee
> On Nov 12, 2016, at 7:00 PM, Vinay Gulani wrote:
>
> Hi,
>
> I am getting below warning message:
>
> WARN kafka.server.KafkaServer - [Kafka Server 1],
-1
Thanks.
> On Oct 25, 2016, at 2:16 PM, Harsha Chintalapani wrote:
>
> Hi All,
> We are proposing to have a REST Server as part of Apache Kafka
> to provide producer/consumer/admin APIs. We Strongly believe having
> REST server functionality with Apache Kafka will help a lot of user
These errors are possibly caused by topic deletion (especially on v0.8.2.x); refer
to the issues below.
https://issues.apache.org/jira/browse/KAFKA-2231
https://issues.apache.org/jira/browse/KAFKA-1194
The replica fetchers are still querying their leaders for partitions of the deleted
topic. May need a restart of
Typically "preferred leader election” would fail if/when one or more brokers
still did not come back online after being down for some time. Is that your
scenario?
-Zakee
> On Aug 11, 2016, at 12:42 AM, Sudev A C wrote:
>
> Hi,
>
> With *auto.leader.rebalance.ena
Increasing the number of retries and/or retry.backoff.ms will help reduce the data
loss. Figure out how long the NotLeaderForPartitionException (NLFPE) lasts (it
happens only as long as the metadata is obsolete), and configure the props below
accordingly.
message.send.max.retries=3 (default)
retry.backoff.ms=100 (default)
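As a rough illustration, the defaults only cover about 3 * 100 ms = 300 ms of stale
metadata; if the NLFPE window in your case lasts, say, ~5 seconds, something in the
order of the following should ride it out (the exact numbers are just an example):
message.send.max.retries=50
retry.backoff.ms=100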
> On Aug 12, 201
Hi Prateek,
Looks like you are using the default batch.size, which is ~16K; it forces messages
to be sent immediately since your single message is larger than that. Try
using a larger batch.size.
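For example, something like this in the producer config (1048576 is only an
illustration; pick a value comfortably larger than your largest message):
batch.size=1048576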
Thanks
Zakee
> On Oct 14, 2015, at 10:29 AM, prateek arora
> wrote:
>
> Hi
>
>
.
-Zakee
> On Jun 20, 2015, at 4:23 PM, Jiangjie Qin wrote:
>
> It seems that your log.index.size.max.bytes was 1K and probably was too
> small. This will cause your index file to reach its upper limit before
> fully indexing the log segment.
>
> Jiangjie (Becket) Qin
>
>
(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Thanks
Zakee
With retries=1 do you still see the 3-second delay? The idea is, you can change
these properties to reduce the time before the exception is thrown to 1 second or
below. Does that help?
Thanks
Zakee
> On Apr 28, 2015, at 10:29 PM, Madhukar Bharti
> wrote:
>
> Hi Zakee,
>
>> message.
What values do you have for the properties below? Or are these set to the defaults?
message.send.max.retries
retry.backoff.ms
topic.metadata.refresh.interval.ms
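(If they are unset, I believe the old producer defaults are roughly
message.send.max.retries=3, retry.backoff.ms=100 and
topic.metadata.refresh.interval.ms=600000, i.e. 10 minutes.)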
Thanks
Zakee
> On Apr 23, 2015, at 11:48 PM, Madhukar Bharti
> wrote:
>
> Hi All,
>
> Once gone through code found th
t)
I will try moving the data from the full volume to less-used volumes, though it
seems an unclean workaround, not suitable in prod.
Thanks,
Zakee
> On Mar 21, 2015, at 11:50 AM, svante karlsson wrote:
>
> The shutdown is expected. All data in a partition is kept in a single
> directory (=
/jira/browse/KAFKA-2038
Thanks
Zakee
> What version are you running ?
Version 0.8.2.0
> Your case is 2). But the only thing weird is your replica (broker 3) is
> requesting for offset which is greater than the leaders log end offset.
So what could be the cause?
Thanks
Zakee
> On Mar 17, 2015, at 11:45 AM, May
Hi Mayuresh,
The logs are already attached and are in reverse order, from
[2015-03-14 07:46:52,517] back to the time when the brokers were started.
Thanks
Zakee
> On Mar 17, 2015, at 12:07 AM, Mayuresh Gharat
> wrote:
>
> Hi Zakee,
>
> Thanks for the logs. Can
Hi Mayuresh,
Here are the logs.
Broker-4
[2015-03-13 17:49:40,514] INFO Partition [Topic22kv,5] on broker 4: Shrinking
ISR for partition [Topic22kv,5] from 2,4,3 to 2,4 (kafka.cluster.Partition)
[2015-03-13 17:49:40,514] INFO Partition [Topic22kv,5] on broker 4: Shrinking
ISR for partition [To
Ah, you are right.
Thanks
Zakee
> On Mar 16, 2015, at 2:05 PM, Gwen Shapira wrote:
>
> Your kafka log directory (in config file, under log.dir) contains
> directories that are not KafkaTopics. Possibly hidden directory.
>
> Check what "ls -la" shows in that dire
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Thanks
Zakee
What's your flood
Yes, I don’t have any clients associated with the sending IP.
"a port scanner” is a good clue. Thanks for pointing it out.
Thanks
Zakee
> On Mar 16, 2015, at 10:52 AM, Gwen Shapira wrote:
>
> Kafka currently has request types 0-12.
> If the bytes Kafka got were parse
kafka.network.RequestChannel$Request.(RequestChannel.scala:50)
at kafka.network.Processor.read(SocketServer.scala:450)
at kafka.network.Processor.run(SocketServer.scala:340)
at java.lang.Thread.run(Thread.java:662)
Thanks
Zakee
partitioner.
String message = msg.getString();
String uniqKey = "" + rnd.nextInt();   // random key
String partKey = getPartitionKey();    // partition key (used by the partitioner)
KeyedMessage<String, String> data =
    new KeyedMessage<String, String>(this.topicName, uniqKey, partKey, message);
producer.send(data);
Thanks
Zakee
> On Mar 14, 201
set 1400864851 (kafka.server.ReplicaFetcherThread)
Thanks
Zakee
> On Mar 9, 2015, at 12:18 PM, Zakee wrote:
>
> No broker restarts.
>
> Created a kafka issue: https://issues.apache.org/jira/browse/KAFKA-2011
>
>>> Logs
I will try to reproduce it by repeating the steps I remember, the next time I
restart the cluster.
Thanks
Zakee
> On Mar 13, 2015, at 4:34 PM, Jiangjie Qin wrote:
>
> Can you reproduce this problem? Although the fix is straightforward, we
> would like to understand why t
Yes, the leaders are currently spread evenly across the five brokers. I also see
the FetchRequestPurgatory.PurgatorySize peak as high as ~7.2M and then suddenly
drop to a couple hundred thousand.
Thanks
Zakee
> On Mar 13, 2015, at 5:27 PM, Joel Koshy wrote:
>
>
Thanks, Mayuresh. I did the same and it fixed the issue.
Thanks
Zakee
> On Mar 13, 2015, at 3:56 PM, Mayuresh Gharat
> wrote:
>
> The index files work in the following way:
> It's a mapping from logical offsets to a particular file location within the
> log file segment.
&
I have 35 topics with a total of 398 partitions (2 of them are expected to be
very high volume, so I allocated 28 partitions to them; the others vary between 5
and 14).
Thanks
Zakee
> On Mar 13, 2015, at 3:25 PM, Joel Koshy wrote:
>
> I think what people have observed in the pas
ill be re-created.
> find $your_data_directory -size 10485760c -name "*.index" #-delete
Thanks
Zakee
> On Mar 13, 2015, at 3:38 PM, Zakee wrote:
>
> I did a shutdown of the cluster and then tried to restart, and saw the below
> error on one of the 5 brokers; I can’t restart this
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2015-03-13 15:27:31,831] INFO [Kafka Server 5], shutting down
(kafka.server.KafkaServer)
Thanks
Zakee
Hi Mayuresh,
I have currently set this property to 4 and I see from the logs that it starts
12 threads on each broker. I will try increasing it further.
Thanks
Zakee
> On Mar 13, 2015, at 11:53 AM, Mayuresh Gharat
> wrote:
>
> You might want to increase the number of Rep
Sorry, but I'm still confused. Is it the maximum number of threads (fetchers) to
fetch from a leader, or the maximum number of threads within a follower broker?
Thanks for clarifying,
-Zakee
> On Mar 12, 2015, at 11:11 PM, tao xiao wrote:
>
> The number of fetchers is configurable via num.replica
wer and leader
replicas, should be less than replica.lag.max.messages (currently set to 5000)
Thanks
Zakee
Zakee
> On Mar 12, 2015, at 11:15 AM, James Cheng wrote:
>
> Ah, I understand now. I didn't realize that there was one fetcher thread per
> broker.
>
> Thanks Tao & Guozhang!
> -James
>
>
> On Mar 11, 2015, at 5:00 PM, tao xiao <mailto:xiaotao...@gmai
2)
Thanks
Zakee
1-5], Error for
>> partition [Topic-22,9] to broker 5:class
>> kafka.common.NotLeaderForPartitionException
>> (kafka.server.ReplicaFetcherThread)
> Could you paste the related logs in controller.log?
What specifically should I search for in the logs?
Thanks,
Zakee
>
rThread)
>> [2015-03-07 14:23:28,963] ERROR [ReplicaFetcherThread-2-5], Error for
>> partition [Topic-2,21] to broker 5:class
>> kafka.common.NotLeaderForPartitionException
>> (kafka.server.ReplicaFetcherThread)
>> [2015-03-07
Correction: actually, the rebalance happened until about 24 hours after the
start, and that's where the errors below were found. Ideally the rebalance should
not have happened at all.
Thanks
Zakee
> On Mar 9, 2015, at 10:28 AM, Zakee wrote:
>
>> Hmm, that sounds like a bug. Can you p
probably known this, just to double confirm.
Yes
> 2. In zookeeper path, can you verify /admin/preferred_replica_election
> does not exist?
ls /admin
[delete_topics]
ls /admin/preferred_replica_election
Node does not exist: /admin/preferred_replica_election
Thanks
Zakee
> On Mar 7, 201
I started with a clean cluster and started to push data. It still does the
rebalance at random times even though auto.leader.rebalance.enable is set to
false.
Thanks
Zakee
> On Mar 6, 2015, at 3:51 PM, Jiangjie Qin wrote:
>
> Yes, the rebalance should not happen in that case.
Thanks, Jiangjie, I will try with a clean cluster again.
Thanks
Zakee
> On Mar 6, 2015, at 3:51 PM, Jiangjie Qin wrote:
>
> Yes, the rebalance should not happen in that case. That is a little bit
> strange. Could you try to launch a clean Kafka cluster with
> auto.leader.el
lance happening. My understanding was the rebalance will not happen
when this is set to false.
Thanks
Zakee
> On Feb 25, 2015, at 5:17 PM, Jiangjie Qin wrote:
>
> I don’t think num.replica.fetchers will help in this case. Increasing
> number of fetcher threads will only help i
producer.purgatory.purge.interval.requests=5000
Thanks
Zakee
Thanks, I have added them for monitoring.
-Zakee
> On Feb 27, 2015, at 9:21 AM, Jun Rao wrote:
>
> Zakee,
>
> It would be useful to get the following.
>
> kafka.network:name=RequestQueueSize,type=RequestChannel
> kafka.network:name=RequestQueueTimeMs,request=F
Thanks,
Zakee
> On Feb 26, 2015, at 9:10 AM, Jun Rao wrote:
>
> That may be enough. What's the RequestQueueSize and RequestQueueTimeMs?
>
> Thanks,
>
> Jun
>
> On Wed, Feb 25, 2015 at 10:24 PM, Zakee wrote:
>
>> Well currently I have configured
Well, currently I have configured 14 threads for both I/O and network. Do you
think we should consider more?
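(That is, num.io.threads=14 and num.network.threads=14 in server.properties.)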
Thanks
-Zakee
On Wed, Feb 25, 2015 at 6:22 PM, Jun Rao wrote:
> Then you may want to consider increasing num.io.threads
> and num.network.threads.
>
> Thanks,
>
> Jun
Thanks, Jiangjie.
Yes, I do see under-replicated partitions usually shooting up every hour. Anything
I could try to reduce it?
How does "num.replica.fetchers" affect the replica sync? Currently I have
configured 7 on each of the 5 brokers.
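(That is, num.replica.fetchers=7 in server.properties on each of the 5 brokers.)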
-Zakee
On Wed, Feb 25, 2015 at 4:17 PM, Jiangjie Qin
wrote
Do you have the property auto.leader.rebalance.enable=true set in brokers?
Thanks
-Zakee
On Tue, Feb 24, 2015 at 11:47 PM, ZhuGe wrote:
> Hi all: We have a cluster of 3 brokers (id: 0, 1, 2). We restarted (simply using
> stop.sh and start.sh in the bin directory) broker 1. The broker s
Broker 2]: Fetch request
with correlation id 950084 from client ReplicaFetcherThread-1-2 on
partition [TestTopic,2] failed due to Leader not local for partition
[TestTopic,2] on broker 2 (kafka.server.ReplicaManager)
Any ideas?
-Zakee
Similar pattern for that too. Mostly hovering below.
-Zakee
On Tue, Feb 24, 2015 at 2:43 PM, Jun Rao wrote:
> What about RequestHandlerAvgIdlePercent?
>
> Thanks,
>
> Jun
>
> On Mon, Feb 23, 2015 at 8:47 PM, Zakee wrote:
>
> > Hi Jun,
> >
> > With ~10
Does that count get frozen at a fixed number, or at some random number?
-Zakee
On Mon, Feb 23, 2015 at 9:48 AM, Stuart Reynolds
wrote:
> See SimpleConsumer. getOffsetsBefore
> and the getLastOffset example here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleCons
Hi Jun,
With ~100G of data being pushed per hour across 35 topics
(replication factor 3), the NetworkProcessorAvgIdlePercent mostly
shows below 0.5 at times when the producers send at a high rate.
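(For reference, the metric I am looking at is the broker MBean
kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent, if I have the
full name right; 0 means the network threads are fully busy, 1 means fully idle.)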
Thanks
-Zakee
On Sun, Feb 22, 2015 at 10:29 PM, Jun Rao wrote:
> What kind of load do
Thanks, Ewen. I will try these.
-Zakee
On Sun, Feb 22, 2015 at 11:51 PM, Ewen Cheslack-Postava
wrote:
> If you haven't seen it yet, you probably want to look at
> http://kafka.apache.org/documentation.html#java
>
> -Ewen
>
> On Thu, Feb 19, 2015 at 10:53 AM, Zakee wrot
Jun,
I am already using the latest release 0.8.2.1.
-Zakee
On Thu, Feb 19, 2015 at 2:46 PM, Jun Rao wrote:
> Could you try the 0.8.2.1 release being voted on now? It fixes a CPU issue
> and should reduce the CPU load in network thread.
>
> Thanks,
>
> Jun
>
> On Thu,
value seems to be below 0.3 a lot of the time, almost always if we take samples
every five minutes. What should be the threshold to raise an alarm?
What would be the impact of having this below 0.3, or even zero, as it is most of
the time?
y having such a large heap you are taking away OS memory from
> them).
>
> -Jay
>
> On Wed, Feb 18, 2015 at 4:13 PM, Zakee wrote:
>
> > I am running a cluster of 5 brokers with 40G ms/mx for each. I found one
> of
> > the brokers is constantly using above ~90% of m
cluster? Why would
the index file sizes be so hugely different on one broker? Any ideas?
Regards
Zakee