It looks like replicas never catch up even when there is no load. Am I
missing something?
On Sat, Dec 10, 2016 at 8:09 PM, Mohit Anchlia
wrote:
> Does Kafka automatically replicate the under replicated partitions?
>
> I looked at these metrics through jmxterm the […]
> might be (e.g. saturating network,
> requests processing slow due to some other resource contention, etc).
>
> -Ewen
>
> On Fri, Dec 9, 2016 at 5:20 PM, Mohit Anchlia
> wrote:
What's the best way to fix NotEnoughReplication given all the nodes are up
and running? Zookeeper did go down momentarily. We are on Kafka 0.10
org.apache.kafka.common.errors.NotEnoughReplicasException: Number of insync
replicas for partition [__consumer_offsets,20] is [1], below required
minimum
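To see which partitions are short of in-sync replicas, and whether a min.insync.replicas override is set on the affected topic, something along these lines should work with the stock CLI scripts (the ZooKeeper address is an assumption; NotEnoughReplicasException fires when the ISR is smaller than min.insync.replicas):

```shell
# Partitions whose in-sync replica set is smaller than the full replica set
bin/kafka-topics.sh --zookeeper localhost:2181 --describe \
    --under-replicated-partitions

# Topic-level config overrides (e.g. min.insync.replicas) for the topic
# named in the error; the broker-level default may also apply
bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
    --entity-type topics --entity-name __consumer_offsets
```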
auto.offset.reset: earliest
>
> or otherwise made sure (KafkaConsumer.position()) that the consumer does
> not just wait for *new* messages to arrive?
>
> Harald.
>
>
>
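Both approaches mentioned above come down to one consumer setting or one seek call; a minimal sketch, assuming a 0.10 broker on localhost:9092 and a hypothetical group id (only the property wiring runs here; the seek-based variant is indicated in comments):

```java
import java.util.Properties;

public class OffsetResetExample {
    // Option 1: with no committed offset, start from the earliest message
    // instead of waiting only for *new* messages to arrive.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "demo-group");              // hypothetical group
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        // Option 2 (sketch): after the first poll() assigns partitions,
        //   consumer.seekToBeginning(consumer.assignment());
        //   long pos = consumer.position(tp); // verify where reads start
        System.out.println(consumerProps().getProperty("auto.offset.reset"));
    }
}
```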
> On 06.12.2016 20:11, Mohit Anchlia wrote:
I see this message in the logs:
[2016-12-06 13:54:16,586] INFO [GroupCoordinator 0]: Preparing to
restabilize group DemoConsumer with old generation 3
(kafka.coordinator.GroupCoordinator)
On Tue, Dec 6, 2016 at 10:53 AM, Mohit Anchlia
wrote:
I have a consumer polling a topic on Kafka 0.10. Even though the topic has
messages, the consumer poll is not fetching them. The thread dump
reveals:
"main" #1 prio=5 os_prio=0 tid=0x7f3ba4008800 nid=0x798 runnable
[0x7f3baa6c3000]
java.lang.Thread.State: RUNNABLE
at sun.n
26 AM, Matthias J. Sax
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> It's a client issue... But CP 3.1 should be out in about 2 weeks...
> Of course, you can use Kafka 0.10.1.0 for now. It was released last
> week and does contain the fix.
>
> - -Matthias
> Kafka 0.10.1.0, which was released last week, does contain the fix
> already. The fix will be in CP 3.1 coming up soon!
>
> (sorry that I did mix up versions in a previous email)
>
> - -Matthias
>
> On 10/23/16 12:10 PM, Mohit Anchlia wrote:
> > So if I get it right I will not hav
> should be released in the next weeks
>
> So CP 3.2 should be there in about 4 months (Kafka follows a time-based
> release cycle of 4 months and CP usually aligns with Kafka releases)
>
> - -Matthias
>
>
> On 10/20/16 5:10 PM, Mohit Anchlia wrote:
> > Any idea of when 3.2 is co
ster branch because of API changes from
> 0.10.0.x to 0.10.1 (and thus, changing CP-3.1 to 0.10.1.0 will not be
> compatible and not compile, while changing CP-3.2-SNAPSHOT to 0.10.1.0
> should work -- hopefully ;) )
>
>
> - -Matthias
>
> On 10/20/16 4:02 PM, Mohit Anchlia wrote
branch, after CP-3.1 was
> released
>
>
> - -Matthias
>
> On 10/20/16 3:48 PM, Mohit Anchlia wrote:
> > I just now cloned this repo. It seems to be using 10.1
> >
> > https://github.com/confluentinc/examples and running examples in
> > https://github.com/c
> Are you on Kafka 0.10.0.x on Windows? If so, this is a
> known issue that is fixed in Kafka 0.10.1 that was just released today.
>
> Also: which examples are you referring to? And, to confirm: which git
> branch / Kafka version / OS in case my guess above was wrong.
>
>
> On Thursday, October
I am trying to run the examples from git. While running the wordcount
example I see this error:
Caused by: *java.lang.RuntimeException*: librocksdbjni-win64.dll was not
found inside JAR.
Am I expected to include this jar locally?
It's not clear whether the 0.9 release includes security features for LDAP
authentication and authorization. If authN and authZ are available, can
somebody point me to relevant documentation that shows how to configure
Kafka to enable authN and authZ?
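For what it's worth, Kafka 0.9 ships SSL and SASL (Kerberos) support plus a pluggable authorizer, rather than LDAP out of the box. A broker-side sketch of the kind of server.properties entries involved (all paths, hostnames, and passwords are placeholders):

```properties
# server.properties fragment (0.9.x); paths and hostnames are placeholders
listeners=SSL://broker1:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
# Client authentication (authN) via SSL certificates:
ssl.client.auth=required
# Authorization (authZ) via the built-in ACL authorizer:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
```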
On the server side this is what I see:
[2015-11-20 14:45:31,849] INFO Closing socket connection to /177.40.23.2.
(kafka.network.Processor)
On Fri, Nov 20, 2015 at 11:51 AM, Mohit Anchlia
wrote:
I am using latest stable release of Kafka and trying to post a message.
However I see this error:
Client:
Exception in thread "main" *kafka.common.FailedToSendMessageException*:
Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(
*DefaultEventHandler.scala:
Are there any command line or UI tools available to monitor Kafka?
Is there a tentative date for the release of 0.9.0? I tried looking at Jira
tickets however there is no mention of a tentative date when 0.9.0 is going
to be released.
Is there a tentative release date for Kafka 0.9.0?
On Sat, Oct 24, 2015 at 11:13 PM, Guozhang Wang wrote:
> Mohit,
>
> We will update the java docs page to include more examples using the APIs
> soon, will keep you posted.
>
> Guozhang
>
> On Fri, Oct 23, 2015 at 9:30 AM, Mohit Anchlia
> wrote:
>
> > Can I get a lin
't have to handle errors that way, we are just showing that you can.
>
> On Thu, Oct 22, 2015 at 8:34 PM, Mohit Anchlia
> wrote:
> > It's in this link. Most of the examples have some kind of error handling
> >
> > http://people.apache.org/~nehanarkhede/kafka-0.9-cons
> Guozhang
>
> On Thu, Oct 22, 2015 at 5:43 PM, Mohit Anchlia
> wrote:
>
> > The examples in the javadoc seem to imply that developers need to manage
> > all of the aspects around failures. Those examples are for rewinding
> > offsets, dealing with failed partitions for
> can periodically commit these offsets back to Kafka.
>
> Guozhang
>
> On Thu, Oct 22, 2015 at 10:11 AM, Mohit Anchlia
> wrote:
It looks like the new consumer API expects developers to manage the
failures? Or is there some other API that can abstract the failures,
primarily:
1) Automatically resend failed messages because of a network issue or some
other issue between the broker and the consumer
2) Ability to acknowledge rec
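Neither retry nor acknowledgement is automatic in the new consumer; the application typically wraps processing in its own retry logic and commits offsets only after success. A generic sketch of such a retry helper (not part of the Kafka API; all names are illustrative):

```java
import java.util.function.Supplier;

public class RetryExample {
    // Retry a task up to maxAttempts times, rethrowing the last failure.
    // A consumer would commit offsets only after this returns successfully,
    // so a transiently failing message is not lost.
    static <T> T withRetries(Supplier<T> task, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.get();
            } catch (RuntimeException e) {
                last = e; // transient failure: try again
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulate a task that fails twice, then succeeds on attempt 3.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints "ok after 3 attempts"
    }
}
```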
I read through the documentation however when I try to access Java API
through the link posted on the design page I get "no page found"
http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/kafka/clients/consumer/KafkaConsumer.html
On Wed, Oct 21, 2015 at 9:59 AM, Moh
never mind, I found the documentation
On Wed, Oct 21, 2015 at 9:50 AM, Mohit Anchlia
wrote:
> Thanks. Where can I find new Java consumer API documentation with
> examples?
>
> On Tue, Oct 20, 2015 at 6:37 PM, Guozhang Wang wrote:
>
>> There are a bunch of new features ad
framework for ingress / egress of Kafka.
>
> Guozhang
>
> On Tue, Oct 20, 2015 at 4:32 PM, Mohit Anchlia
> wrote:
>
> > Thanks. Are there any other major changes in .9 release other than the
> > Consumer changes. Should I wait for .9 or go ahead and performance test
>
te yet.
>
> Guozhang
>
> On Tue, Oct 20, 2015 at 3:18 PM, Mohit Anchlia
> wrote:
>
> > Is there a wiki page where I can find all the major design changes in
> > 0.9.0?
> >
> > On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang
> wrote:
> >
Is there a wiki page where I can find all the major design changes in 0.9.0?
On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang wrote:
> It is not released yet, we are shooting for Nov. for 0.9.0.
>
> Guozhang
>
> On Mon, Oct 19, 2015 at 4:08 PM, Mohit Anchlia
> wrote:
>
>
re interested in trying it out:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
>
> Guozhang
>
> On Mon, Oct 19, 2015 at 2:54 PM, Mohit Anchlia
> wrote:
>
> > By old consumer you mean version < .8?
> >
> > Here are the links
> Are you referring to the new Java consumer or the old consumer? Or more
> specifically what examples doc are you referring to?
>
> Guozhang
>
> On Mon, Oct 19, 2015 at 10:01 AM, Mohit Anchlia
> wrote:
I see most of the consumer examples create a while/for loop and then fetch
messages iteratively. Is that the only way by which clients can consume
messages? If this is the preferred way, then how do you deal with failures
and exceptions such that messages are not lost?
Also, please point me to exampl
I am seeing the following exception and don't understand the issue here. Is
there a way to resolve this error?
client consumer logs:
Exception in thread "main" kafka.common.ConsumerRebalanceFailedException:
groupB_ip-10-38-19-230-1414174925481-97fa3f2a can't rebalance after 4
retries
at
kafka.c
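Rebalance failures like this in the 0.8.x high-level consumer are usually addressed by giving the group more rebalance attempts and backoff so all members can settle; a sketch (the ZooKeeper address is an assumption, values are illustrative):

```java
import java.util.Properties;

public class RebalanceTuning {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumed ZK address
        props.put("group.id", "groupB");
        // Default is 4 attempts, which matches the error above.
        props.put("rebalance.max.retries", "10");
        props.put("rebalance.backoff.ms", "2000");
        props.put("zookeeper.session.timeout.ms", "6000");
        return props;
    }

    public static void main(String[] args) {
        // Would normally be passed to
        //   Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        System.out.println(consumerProps().getProperty("rebalance.max.retries"));
    }
}
```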
Wed, Oct 22, 2014 at 11:41 AM, Mohit Anchlia
> wrote:
>
> > I can't find this property in server.properties file. Is that the right
> > place to set this parameter?
> > On Tue, Oct 21, 2014 at 6:27 PM, Jun Rao wrote:
> >
> > > Could you also se
ait);
> >
> > On Tue, Oct 21, 2014 at 1:21 PM, Guozhang Wang
> wrote:
> >
> > > This is a consumer config:
> > >
> > > fetch.wait.max.ms
> > >
> > > On Tue, Oct 21, 2014 at 11:39 AM, Mohit Anchlia <
> mohitanch...@gmail.com>
leByteChannelImpl.read(Channels.java:385)
- locked <0x9515bcb0> (a java.lang.Object)
at kafka.utils.Utils$.read(Utils.scala:375)
On Tue, Oct 21, 2014 at 2:15 PM, Mohit Anchlia
wrote:
> I set the property to 1 in the consumer code that is passed to
> "createJavaConsumerCo
fetch.wait.max.ms
>
> On Tue, Oct 21, 2014 at 11:39 AM, Mohit Anchlia
> wrote:
>
> > Is this a parameter I need to set in the Kafka server or on the client
> > side?
> > Also, can you help point out which one exactly is the consumer max wait
> > time from this list?
> and see if that fixes the problem).
>
> The reason I suspect this problem is because the default timeout in the
> java consumer is 100ms.
>
> -Jay
>
> On Tue, Oct 21, 2014 at 11:06 AM, Mohit Anchlia
> wrote:
>
> > This is the version I am using: kafka_2.10-0.8.1.1
>
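The 100 ms default mentioned above presumably refers to the consumer's fetch.wait.max.ms; a sketch of where these latency knobs live in the 0.8.x high-level consumer (values are illustrative, the ZooKeeper address is an assumption):

```java
import java.util.Properties;

public class FetchTuning {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumed ZK address
        props.put("group.id", "perf-test");               // hypothetical group
        // Max time the broker blocks a fetch while fetch.min.bytes of data
        // is not yet available; the default is 100 ms.
        props.put("fetch.wait.max.ms", "10");
        // Return as soon as any data at all is available.
        props.put("fetch.min.bytes", "1");
        return props;
    }

    public static void main(String[] args) {
        // Would normally be passed to
        //   Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        System.out.println(consumerProps().getProperty("fetch.wait.max.ms"));
    }
}
```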
> -Jay
>
> On Tue, Oct 21, 2014 at 10:50 AM, Mohit Anchlia
> wrote:
It's consistently close to 100ms which makes me believe that there are some
settings that I might have to tweak, however, I am not sure how to confirm
that assumption :)
On Tue, Oct 21, 2014 at 8:53 AM, Mohit Anchlia
wrote:
> I have a java test that produces messages and then consumer c
Narkhede
wrote:
> Can you give more information about the performance test? Which test? Which
> queue? How did you measure the dequeue latency.
>
> On Mon, Oct 20, 2014 at 5:09 PM, Mohit Anchlia
> wrote:
I am running a performance test, and from what I am seeing, messages are
taking about 100ms to pop from the queue itself, hence making the test
slow. I am looking for pointers on how I can troubleshoot this issue.
There seems to be plenty of CPU and IO available. I am running 22 producers
> executes the send - either
> returning immediately (if async) or when it managed to contact the
> broker (if sync).
>
> Gwen
>
> On Fri, Oct 17, 2014 at 4:38 PM, Mohit Anchlia
> wrote:
> > My understanding of sync is that producer waits on .send until Kafka
> > re
en
>
> On Fri, Oct 17, 2014 at 4:13 PM, Mohit Anchlia
> wrote:
> > Still don't understand the difference. If it's not waiting for the ack,
> > then doesn't that make it async?
> > On Fri, Oct 17, 2014 at 12:55 PM, wrote:
> >
> >> Its using the
ng.
>
> —
> Sent from Mailbox
>
> On Fri, Oct 17, 2014 at 3:15 PM, Mohit Anchlia
> wrote:
>
> > Little confused :) From one of the examples I am using property
> > request.required.acks=0,
> > I thought this sets the producer to be async?
> > On Fr
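These are two independent settings in the 0.8 producer: request.required.acks controls how many broker acknowledgements a send waits for (0, 1, or -1), while producer.type selects the sync vs. async path. A sketch (the broker address is an assumption):

```java
import java.util.Properties;

public class ProducerModes {
    static Properties syncProducerProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // assumed broker
        props.put("producer.type", "sync");      // each send blocks
        props.put("request.required.acks", "1"); // wait for the leader's ack
        return props;
    }

    static Properties asyncProducerProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // assumed broker
        props.put("producer.type", "async");     // batched in a background thread
        props.put("request.required.acks", "0"); // fire and forget, no ack
        return props;
    }

    public static void main(String[] args) {
        System.out.println(syncProducerProps().getProperty("producer.type"));
    }
}
```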
On Fri, Oct 17, 2014 at 2:57 PM, Mohit Anchlia
> wrote:
> > Thanks! How can I tell if I am using async producer? I thought all the
> > sends are async in nature
> > On Fri, Oct 17, 2014 at 11:44 AM, Gwen Shapira
> > wrote:
> >
> >> If you h
usual suspect! perhaps look in the FAQ for tips with that issue)
>
> Gwen
>
> On Fri, Oct 17, 2014 at 12:56 PM, Mohit Anchlia
> wrote:
> > Is Kafka supposed to throw exception if topic doesn't exist? It appears
> > that there is no exception thrown even though no messages are delivered
> and
> > there are errors logged in Kafka logs.
>
Is Kafka supposed to throw an exception if the topic doesn't exist? It
appears that no exception is thrown even though no messages are delivered
and there are errors logged in the Kafka logs.
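One thing worth checking in this situation is the broker's topic auto-creation setting; whether a send to a missing topic fails or silently triggers topic creation depends on it (a server.properties fragment; the default is true):

```properties
# server.properties: create topics on first use instead of failing the produce
auto.create.topics.enable=true
```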
I added the following dependency in my pom file, however after I add the
dependency I get errors:

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
</dependency>

Errors:
ArtifactTransferException: Failure to transfer
com.sun.jdmk:jmxtools:jar:1.2.1 from
Missing artifact com.sun.jmx:jmxri:jar:1.2.1
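A common workaround for those unresolvable com.sun.jdmk/com.sun.jmx artifacts (pulled in transitively through Kafka 0.8's old log4j/JMX dependencies) is to exclude them from the dependency; a sketch:

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jdmk</groupId>
      <artifactId>jmxtools</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jmx</groupId>
      <artifactId>jmxri</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```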
Could somebody help throw some light on why my commands might be hanging?
What's the easiest way to monitor and debug this problem?
On Mon, Oct 13, 2014 at 5:07 PM, Mohit Anchlia
wrote:
> I am new to Kafka and I just installed Kafka. I am getting the following
> error. Zookeeper
On Mon, Oct 13, 2014 at 5:29 PM, Jun Rao wrote:
> Is that error transient or persistent?
>
> Thanks,
>
> Jun
>
> On Mon, Oct 13, 2014 at 5:07 PM, Mohit Anchlia
> wrote:
>
> > I am new to Kafka and I just installed Kafka. I am getting the following
> > error. Zooke
I am new to Kafka and I just installed Kafka. I am getting the following
error. Zookeeper seems to be running.
[ec2-user@ip-10-231-154-117 kafka_2.10-0.8.1.1]$
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> iterator will take care of it for you. It uses long polling
> to listen for messages on the broker and blocks those fetch requests
> until there is data available.
>
> hope that helps.
>
> -Harsha
> On Fri, Oct 10, 2014, at 12:32 PM, Mohit Anchlia wrote:
> > I
I am new to Kafka and have very little familiarity with Scala. I see that
the build requires the "sbt" tool, but do I also need to install Scala
separately? Is there detailed documentation on software requirements for
the broker machine?
I am also looking for 3 different types of java examples 1) Follow
r