the overall cluster performance based on
the retention time. Since you already reduced retention to 1 hour, you should be
good.
Thanks
Zakee
> On Jun 28, 2018, at 2:36 PM, Vignesh wrote:
>
> Hello kafka users,
>
> How to recover a kafka broker from disk full ?
>
> I updated the
Did you try with --new-consumer?
-Zakee
> On Mar 20, 2018, at 10:26 AM, Anand, Uttam <uttam.an...@bnsf.com> wrote:
>
> I am facing an issue while consuming messages using the bootstrap server, i.e.,
> the Kafka server. Any idea why it is not able to consume messages wi
What version of Kafka are you on? There were issues with topic deletion until, I
believe, Kafka 0.10.1.0.
You may be hitting this issue… https://issues.apache.org/jira/browse/KAFKA-2231
Thanks
Zakee
> On Apr 6, 2017, at 11:20 AM, Adria
Hi Nico,
They can be defined at both the cluster and topic levels. See
https://kafka.apache.org/documentation/#topic-config for the available
topic-level overrides.
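For example, a topic-level override can be applied with the kafka-configs tool
(host, port, and topic name below are placeholders; the exact flags depend on
your Kafka version, so check your release's docs):

```
# Set a per-topic retention override without touching broker-wide config
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=3600000
```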
Cheers!
-Z
> On Mar 8, 2017, at 12:41 PM, Nicolas MOTTE wrote:
>
> Hi everyone,
>
> Is there any
+1
-Zakee
> On Feb 14, 2017, at 1:56 PM, Jay Kreps <j...@confluent.io> wrote:
>
> +1
>
> Nice improvement.
>
> -Jay
>
> On Tue, Feb 14, 2017 at 1:22 PM, Steven Schlansker <
> sschlans...@opentable.com> wrote:
>
>> Hi, it looks like
Brokers failed repeatedly, leaving behind page cache in memory, which caused
broker restarts to fail with OOM every time.
After manually cleaning up page-cache, I was able to restart the broker.
However, still wondering what could have caused this state in the first place.
Any ideas?
-Zakee
(FileChannelImpl.java:904)
... 28 more
Thanks
-Zakee
No. A newer client API won’t work with an older broker version. Generally, an
older client should work with a newer broker version.
-Zakee
> On Nov 18, 2016, at 11:58 AM, Weian Deng <wd...@walmartlabs.com> wrote:
>
> More specifically, Is Kafka Java client 0.9.0.1 compat
(part of) the data and
maintains its replica state on different nodes.
The same applies on the consumer end: you can start as many parallel processes
(or threads) as there are partitions in a topic, to achieve better throughput.
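To illustrate that parallelism bound, here is a standalone sketch (not Kafka
client code; the round-robin strategy is just one possible assignment) showing
why running more consumers than partitions leaves the extras idle:

```python
# Toy model of partition-to-consumer assignment in a consumer group.
# Round-robin is one simple strategy; real Kafka assignors differ in detail.
def assign_partitions(num_partitions, num_consumers):
    """Return {consumer_id: [partition, ...]} via round-robin assignment."""
    assignment = {c: [] for c in range(num_consumers)}
    for p in range(num_partitions):
        assignment[p % num_consumers].append(p)
    return assignment

# With 6 partitions and 3 consumers, each consumer gets 2 partitions;
# with 2 partitions and 3 consumers, one consumer gets nothing to do.
```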
-Zakee
> On Nov 16, 2016, at 9:03 AM, Tauzell, Dave <dav
Yes, offsets are unique per partition.
> I've observed that for example I had offset values equal to zero more times
> then there is the number of Kafka partitions.
Can you elaborate a little more on what you observed?
-Zakee
> On Nov 14, 2016, at 10:06 AM, Dominik Safaric <
Are these the only logs you see, or are there more log events before this that
might be relevant?
-Zakee
> On Nov 12, 2016, at 7:00 PM, Vinay Gulani <vinay.gul...@gmail.com> wrote:
>
> Hi,
>
> I am getting below warning message:
>
> WARN kafka.server.Ka
-1
Thanks.
> On Oct 25, 2016, at 2:16 PM, Harsha Chintalapani wrote:
>
> Hi All,
> We are proposing to have a REST Server as part of Apache Kafka
> to provide producer/consumer/admin APIs. We Strongly believe having
> REST server functionality with Apache Kafka will
These errors are possibly caused by topic deletion (especially on v0.8.2.x);
refer to the issues below.
https://issues.apache.org/jira/browse/KAFKA-2231
https://issues.apache.org/jira/browse/KAFKA-1194
The replica fetchers are still querying their leaders for partitions of the
deleted topic. You may need a restart of
Typically, “preferred leader election” fails when one or more brokers still have
not come back online after being down for some time. Is that your scenario?
-Zakee
> On Aug 11, 2016, at 12:42 AM, Sudev A C <sudev...@goibibo.com> wrote:
>
> Hi,
>
> With *au
Increasing the number of retries and/or retry.backoff.ms will help reduce data
loss. Figure out how long the NotLeaderForPartitionException (NLFPE) persists
(it occurs only as long as metadata is stale), and configure the props below
accordingly.
message.send.max.retries=3 (default)
retry.backoff.ms=100 (default)
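For instance, to tolerate roughly 3 seconds of stale metadata, one might raise
them to something like the following (illustrative values, not recommendations):

```
# ~10 retries x 300 ms backoff covers about 3 s of stale metadata
message.send.max.retries=10
retry.backoff.ms=300
```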
> On Aug 12,
-Zakee
On Jun 20, 2015, at 4:23 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
It seems that your log.index.size.max.bytes was 1 KB, which was probably too
small. This causes the index file to reach its upper limit before the log
segment is fully indexed.
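For comparison, the broker defaults in that era were roughly the following
(worth double-checking against your version's documentation):

```
log.index.size.max.bytes=10485760   # 10 MB default index cap
log.segment.bytes=1073741824        # 1 GB default segment size
```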
Jiangjie (Becket) Qin
On 6/18/15, 4
$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Thanks
Zakee
With retries=1, do you still see the 3-second delay? The idea is that you can
change these properties to reduce the time to throw the exception to 1 second or
below. Does that help?
Thanks
Zakee
On Apr 28, 2015, at 10:29 PM, Madhukar Bharti bhartimadhu...@gmail.com
wrote:
Hi Zakee
What values do you have for the properties below, or are they set to the defaults?
message.send.max.retries
retry.backoff.ms
topic.metadata.refresh.interval.ms
Thanks
Zakee
On Apr 23, 2015, at 11:48 PM, Madhukar Bharti bhartimadhu...@gmail.com
wrote:
Hi All,
After going through the code, I found
/jira/browse/KAFKA-2038
Thanks
Zakee
Hi Mayuresh,
Here are the logs.
Broker-4
[2015-03-13 17:49:40,514] INFO Partition [Topic22kv,5] on broker 4: Shrinking
ISR for partition [Topic22kv,5] from 2,4,3 to 2,4 (kafka.cluster.Partition)
[2015-03-13 17:49:40,514] INFO Partition [Topic22kv,5] on broker 4: Shrinking
ISR for partition
What version are you running ?
Version 0.8.2.0
Your case is 2). The only weird thing is that your replica (broker 3) is
requesting an offset greater than the leader's log end offset.
So what could be the cause?
Thanks
Zakee
On Mar 17, 2015, at 11:45 AM, Mayuresh Gharat
Hi Mayuresh,
The logs are already attached; they are in reverse order, starting from
[2015-03-14 07:46:52,517] and going back to when the brokers were started.
Thanks
Zakee
On Mar 17, 2015, at 12:07 AM, Mayuresh Gharat gharatmayures...@gmail.com
wrote:
Hi Zakee,
Thanks for the logs
)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Thanks
Zakee
)
at kafka.network.RequestChannel$Request.init(RequestChannel.scala:50)
at kafka.network.Processor.read(SocketServer.scala:450)
at kafka.network.Processor.run(SocketServer.scala:340)
at java.lang.Thread.run(Thread.java:662)
Thanks
Zakee
Ah, you are right.
Thanks
Zakee
On Mar 16, 2015, at 2:05 PM, Gwen Shapira gshap...@cloudera.com wrote:
Your Kafka log directory (set via log.dir in the config file) contains
directories that are not Kafka topics, possibly a hidden directory.
Check what ls -la shows in that directory.
Gwen
(kafka.server.ReplicaFetcherThread)
Thanks
Zakee
On Mar 9, 2015, at 12:18 PM, Zakee kzak...@netzero.net wrote:
No broker restarts.
Created a kafka issue: https://issues.apache.org/jira/browse/KAFKA-2011
https://issues.apache.org/jira/browse/KAFKA-2011
Logs for rebalance:
[2015-03-07 16:52:48,969] INFO
Sorry, but I'm still confused. Is it the maximum number of threads (fetchers) to
fetch from a leader, or the maximum number of threads within a follower broker?
Thanks for clarifying,
-Zakee
On Mar 12, 2015, at 11:11 PM, tao xiao xiaotao...@gmail.com wrote:
The number of fetchers is configurable via
Hi Mayuresh,
I have currently set this property to 4, and I see from the logs that it starts
12 threads on each broker. I will try increasing it further.
Thanks
Zakee
On Mar 13, 2015, at 11:53 AM, Mayuresh Gharat gharatmayures...@gmail.com
wrote:
You might want to increase the number
)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2015-03-13 15:27:31,831] INFO [Kafka Server 5], shutting down
(kafka.server.KafkaServer)
Thanks
Zakee
I have 35 topics with a total of 398 partitions (2 of them are expected to be
very high volume, so I allocated 28 partitions to each; the others vary between
5 and 14).
Thanks
Zakee
On Mar 13, 2015, at 3:25 PM, Joel Koshy jjkosh...@gmail.com wrote:
I think what people have observed
Thanks, Mayuresh. I did the same and it fixed the issue.
Thanks
Zakee
On Mar 13, 2015, at 3:56 PM, Mayuresh Gharat gharatmayures...@gmail.com
wrote:
The index files work in the following way: they map logical offsets to a
particular file location within the log segment.
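A rough model of that mapping (a toy sketch of the idea, not Kafka's actual
implementation; the entry values are made up):

```python
# Toy model of a sparse offset index: sorted (relative_offset, file_position)
# entries. A lookup binary-searches for the largest indexed offset <= the
# target offset and returns the byte position from which a linear scan of the
# log segment would start (the scan itself is not shown).
import bisect

def index_lookup(index_entries, target_offset):
    """index_entries: sorted list of (relative_offset, file_position) pairs."""
    offsets = [offset for offset, _ in index_entries]
    i = bisect.bisect_right(offsets, target_offset) - 1
    if i < 0:
        return 0  # target precedes the first entry: scan from segment start
    return index_entries[i][1]
```

Because the index is sparse, a lookup lands at the nearest preceding indexed
position rather than the exact message, which keeps the index small.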
and leader
replicas, should be less than replica.lag.max.messages (currently set to 5000)
Thanks
Zakee
Zakee
On Mar 12, 2015, at 11:15 AM, James Cheng jch...@tivo.com wrote:
Ah, I understand now. I didn't realize that there was one fetcher thread per
broker.
Thanks Tao Guozhang!
-James
On Mar 11, 2015, at 5:00 PM, tao xiao xiaotao...@gmail.com
mailto:xiaotao...@gmail.com wrote
(FetchResponse.scala:231)
at kafka.network.Processor.write(SocketServer.scala:472)
at kafka.network.Processor.run(SocketServer.scala:342)
at java.lang.Thread.run(Thread.java:662)
Thanks
Zakee
this, just to double confirm.
Yes
2. In zookeeper path, can you verify /admin/preferred_replica_election
does not exist?
ls /admin
[delete_topics]
ls /admin/preferred_replica_election
Node does not exist: /admin/preferred_replica_election
Thanks
Zakee
On Mar 7, 2015, at 10:49 PM, Jiangjie Qin j
[ReplicaFetcherThread-1-5], Error for
partition [Topic-22,9] to broker 5:class
kafka.common.NotLeaderForPartitionException
(kafka.server.ReplicaFetcherThread)
Thanks
Zakee
Correction: the rebalance actually kept happening until 24 hours after the
start, and that's where the errors below were found. Ideally the rebalance
should not have happened at all.
Thanks
Zakee
On Mar 9, 2015, at 10:28 AM, Zakee kzak...@netzero.net wrote:
Hmm, that sounds like a bug. Can you
[ReplicaFetcherThread-1-5], Error for
partition [Topic-22,9] to broker 5:class
kafka.common.NotLeaderForPartitionException
(kafka.server.ReplicaFetcherThread)
Could you paste the related logs in controller.log?
What specifically should I search for in the logs?
Thanks,
Zakee
On Mar 9
I started with a clean cluster and began pushing data. It still rebalances at
random intervals even though auto.leader.rebalance.enable is set to false.
Thanks
Zakee
On Mar 6, 2015, at 3:51 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
Yes, the rebalance should not happen
happening. My understanding was that the rebalance would not happen
when this is set to false.
Thanks
Zakee
On Feb 25, 2015, at 5:17 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
I don’t think num.replica.fetchers will help in this case. Increasing the
number of fetcher threads will only help
Thanks, Jiangjie, I will try with a clean cluster again.
Thanks
Zakee
On Mar 6, 2015, at 3:51 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
Yes, the rebalance should not happen in that case. That is a little bit
strange. Could you try to launch a clean Kafka cluster
producer.purgatory.purge.interval.requests=5000
Thanks
Zakee
Thanks, I have added them for monitoring.
-Zakee
On Feb 27, 2015, at 9:21 AM, Jun Rao j...@confluent.io wrote:
Zakee,
It would be useful to get the following.
kafka.network:name=RequestQueueSize,type=RequestChannel
kafka.network:name=RequestQueueTimeMs,request=Fetch,type
Thanks,
Zakee
On Feb 26, 2015, at 9:10 AM, Jun Rao j...@confluent.io wrote:
That may be enough. What's the RequestQueueSize and RequestQueueTimeMs?
Thanks,
Jun
On Wed, Feb 25, 2015 at 10:24 PM, Zakee kzak...@netzero.net wrote:
Well, currently I have configured 14 threads each for I/O
on Broker 2]: Fetch request
with correlation id 950084 from client ReplicaFetcherThread-1-2 on
partition [TestTopic,2] failed due to Leader not local for partition
[TestTopic,2] on broker 2 (kafka.server.ReplicaManager)
Any ideas?
-Zakee
Do you have the property auto.leader.rebalance.enable=true set on the brokers?
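For reference, these are the broker-side knobs involved (defaults vary by
version, so check your release's docs; false here matches what this thread is
trying to achieve):

```
auto.leader.rebalance.enable=false
leader.imbalance.per.broker.percentage=10
leader.imbalance.check.interval.seconds=300
```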
Thanks
-Zakee
On Tue, Feb 24, 2015 at 11:47 PM, ZhuGe t...@outlook.com wrote:
Hi all: We have a cluster of 3 brokers (ids 0, 1, 2). We restarted broker 1
(simply using stop.sh and start.sh in the bin directory). The broker started
Thanks, Jiangjie.
Yes, I do see under-replicated partitions spiking, usually every hour. Anything
I could try to reduce it?
How does num.replica.fetchers affect replica sync? I currently have 7 configured
on each of the 5 brokers.
-Zakee
On Wed, Feb 25, 2015 at 4:17 PM, Jiangjie Qin j
Well, currently I have configured 14 threads each for I/O and network. Do you
think we should consider more?
Thanks
-Zakee
On Wed, Feb 25, 2015 at 6:22 PM, Jun Rao j...@confluent.io wrote:
Then you may want to consider increasing num.io.threads
and num.network.threads.
Thanks,
Jun
On Tue
Does that count freeze at a fixed number, or at a random one?
-Zakee
On Mon, Feb 23, 2015 at 9:48 AM, Stuart Reynolds s...@stureynolds.com
wrote:
See SimpleConsumer. getOffsetsBefore
and the getLastOffset example here:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0
Similar pattern for that too. Mostly hovering below.
-Zakee
On Tue, Feb 24, 2015 at 2:43 PM, Jun Rao j...@confluent.io wrote:
What about RequestHandlerAvgIdlePercent?
Thanks,
Jun
On Mon, Feb 23, 2015 at 8:47 PM, Zakee kzak...@netzero.net wrote:
Hi Jun,
With ~100G of data being
Thanks, Ewen. I will try these.
-Zakee
On Sun, Feb 22, 2015 at 11:51 PM, Ewen Cheslack-Postava e...@confluent.io
wrote:
If you haven't seen it yet, you probably want to look at
http://kafka.apache.org/documentation.html#java
-Ewen
On Thu, Feb 19, 2015 at 10:53 AM, Zakee kzak
Hi Jun,
With ~100G of data being pushed per hour across 35 topics
(replication-factor 3), NetworkProcessorAvgIdlePercent mostly shows below 0.5,
especially when the producers send at a high rate.
Thanks
-Zakee
On Sun, Feb 22, 2015 at 10:29 PM, Jun Rao j...@confluent.io wrote:
What kind
by having such a large heap you are taking away OS memory from
them).
-Jay
On Wed, Feb 18, 2015 at 4:13 PM, Zakee kzak...@netzero.net wrote:
I am running a cluster of 5 brokers with 40 GB -Xms/-Xmx each. I found that one
of
the brokers constantly uses above ~90% of memory for JVM heap usage. I
Jun,
I am already using the latest release 0.8.2.1.
-Zakee
On Thu, Feb 19, 2015 at 2:46 PM, Jun Rao j...@confluent.io wrote:
Could you try the 0.8.2.1 release being voted on now? It fixes a CPU issue
and should reduce the CPU load in network thread.
Thanks,
Jun
On Thu, Feb 19, 2015
in the cluster? Why would
the index file sizes be so vastly different on one broker? Any ideas?
Regards
Zakee