Leader is a replica
On Tue, 18 Sep 2018 at 22:52 jorg.heym...@gmail.com
wrote:
> Hi,
>
> Testing out some kafka consistency guarantees I have following basic
> producer config:
>
> ProducerConfig.ACKS_CONFIG=all
> ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG=true
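A minimal sketch of how those two settings might be assembled in client code, using the string keys the ProducerConfig constants resolve to (`acks` and `enable.idempotence`); the bootstrap server address is a placeholder:

```java
import java.util.Properties;

public class IdempotentProducerConfig {
    // Builds producer properties matching the thread above: acks=all waits
    // for the full ISR before acknowledging, enable.idempotence=true lets the
    // broker deduplicate producer retries. Broker address is a placeholder.
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("acks", "all");                // ProducerConfig.ACKS_CONFIG
        props.setProperty("enable.idempotence", "true"); // ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```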
>
Hi team,
I use Kafka 0.10.0.0 and recently encountered an issue where the hard disk
mounted in one of the nodes experienced performance degradation that caused
the node to become unstable. But the controller didn't remove the node from
the ISR of partitions for which the node is the leader. I wonder if anyway
elivery pattern. The behavior is sort of weird
> and is not self-explanatory. Wondering whether it has anything to do with
> the fact that the number of consumers is too large? In our example, we have
> around 100 consumer connections per broker.
> >
> > Regards,
> > Jeff
> >
Hi,
any pointer will be highly appreciated
On Thu, 30 Nov 2017 at 14:56 tao xiao <xiaotao...@gmail.com> wrote:
> Hi There,
>
>
>
> We are running into a weird situation when using Mirrormaker to replicate
> messages between Kafka clusters across datacenter and reach yo
Hi There,
We are running into a weird situation when using Mirrormaker to replicate
messages between Kafka clusters across datacenter and reach you for help in
case you also encountered this kind of problem before or have some insights
in this kind of issue.
Here is the scenario. We have
Do you have unclean leader election turned on? If killing 100 is the only
way to reproduce the problem, it is possible with unclean leader election
turned on that leadership was transferred to an out-of-ISR follower which may
not have the latest high watermark.
On Sat, Oct 7, 2017 at 3:51 AM Dmitriy
you need to use org.apache.kafka.common.serialization.StringSerializer as
your ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG
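For reference, the equivalent lines in a producer properties file (the key names are what the ProducerConfig constants resolve to; the key serializer line is an assumption, shown only because the two are usually set together):

```properties
value.serializer=org.apache.kafka.common.serialization.StringSerializer
key.serializer=org.apache.kafka.common.serialization.StringSerializer
```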
On Wed, 5 Jul 2017 at 19:18 罗 辉 wrote:
> hi guys:
>
> I got an exception which i searched searchhadoop.com and the archive as
> well and got no
Hi team,
As per my experimentation mirror maker doesn't compress messages sent
to the target broker unless it is configured to do so, even if the messages in
the source broker are compressed. I understand the current implementation of
mirror maker has no visibility into what compression codec the source
Hi team,
I know that Kafka deletes offsets for a consumer group after a period of
time (configured by offsets.retention.minutes) if the consumer group is
inactive for this amount of time. I want to understand the definition of
"inactive". I came across this post[1] and it suggests that no offset
You may run into this bug https://issues.apache.org/jira/browse/KAFKA-4614
On Thu, 12 Jan 2017 at 23:38 Stephen Powis wrote:
> Per my email to the list in Sept, when I reviewed GC logs then, I didn't
> see anything out of the ordinary. (
>
>
The default partition assignor is the range assignor, which assigns partitions
on a per-topic basis. If your topics have only one partition each, they will
all be assigned to the same consumer. You can change the assignor to
org.apache.kafka.clients.consumer.RoundRobinAssignor
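A sketch of the corresponding consumer setting, assuming the 0.9+ Java consumer:

```properties
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
```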
On Thu, 12 Jan 2017 at 22:33 Tobias
I doubt the LB solution will work for Kafka. Clients need to connect to the
leader of a partition to produce/consume messages. If we put a LB in front
of all brokers, meaning all brokers share the same LB, how does the LB
figure out the leader?
On Mon, Nov 21, 2016 at 10:26 PM Martin Gainty
maker first time, unless
produce a new message to this topic to trigger mirror. Thanks!
Best Regards
Johnny
-Original Message-
From: tao xiao [mailto:xiaotao...@gmail.com]
Sent: 24 October 2016 17:10
To: users@kafka.apache.org
Subject: Re: Mirror multi-embedded consumer's configuration
Yo
You need to set auto.offset.reset=smallest to mirror data from beginning
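As a sketch, the line goes in the consumer config file passed to mirror maker (valid for the old consumer of that era; the new consumer uses `earliest` instead of `smallest`):

```properties
auto.offset.reset=smallest
```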
On Mon, 24 Oct 2016 at 17:07 ZHU Hua B wrote:
> Hi,
>
>
> Thanks for your info!
>
>
> Before I launch mirror maker for the first time, there is a topic that
> includes some messages, which have been
Hi team,
I was seeing an issue where a mirror maker attempted to commit an offset
for a partition that was ahead of the log end offset while the
leader of the partition was being restarted. In theory only committed
messages can be consumed by a consumer, which means the messages received
Hi,
As per the description in KIP-32 the timestamp of a Kafka message remains
unchanged when mirrored from one cluster to another if CreateTime is used. But
I tested with mirror maker in Kafka 0.10 and this doesn't seem to be the case.
The timestamp of the same message is different in source and target. I checked
the
I noticed the same issue too with 0.9.
On Wed, 25 May 2016 at 09:49 Andrew Otto wrote:
> “We use the default log retention of 7 *days*" :)*
>
> On Wed, May 25, 2016 at 12:34 PM, Andrew Otto wrote:
>
> > Hiya,
> >
> > We’ve recently upgraded to 0.9. In
I am pretty sure consumer-group.sh uses tools-log4j.properties
On Tue, 24 May 2016 at 17:59 allen chan
wrote:
> Maybe i am doing this wrong
>
> [ac...@ekk001.scl ~]$ cat
> /opt/kafka/kafka_2.11-0.10.0.0/config/log4j.properties
> ..
> log4j.rootLogger=DEBUG,
> behavior is fine. For both of the consumers put the
> ConsumerRebalanceListener.
> I look forward for your results.
> Florin
>
> On Tue, May 10, 2016 at 9:51 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Hi team,
> >
> > I want to know what would happ
Hi team,
I want to know what would happen if the consumer group rebalance takes a long
time, longer than the session timeout.
For example I have two consumers A and B using the same group id. For some
reason during rebalance consumer A takes a long time to finish
onPartitionsRevoked; what would
that I have if I
> need this fix. How can I get patch for this on 0.8.2.1?
>
> Sent from my iPhone
>
> > On May 6, 2016, at 12:06 AM, tao xiao <xiaotao...@gmail.com> wrote:
> >
> > It said this is a duplication. This is the
> > https://issues.apache.o
It said this is a duplication. This is the
https://issues.apache.org/jira/browse/KAFKA-2657 that KAKFA-3112 duplicates
to.
On Thu, 5 May 2016 at 22:13 Raj Tanneru wrote:
>
> Hi All,
> Does anyone know if KAFKA-3112 is merged to 0.9.0.1? Is there a place to
> check which
You can use the built-in mirror maker to mirror data from one Kafka to the
other. http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
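A minimal invocation sketch; the two property file names are placeholders:

```shell
bin/kafka-mirror-maker.sh \
  --consumer.config source-cluster.consumer.properties \
  --producer.config target-cluster.producer.properties \
  --whitelist=".*"
```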
On Thu, 5 May 2016 at 10:47 Dean Arnold wrote:
> I'm developing a Streams plugin for Kafka 0.10, to be run in a dev sandbox,
The time is controlled by metadata.max.age.ms
On Wed, Apr 27, 2016 at 1:19 AM Luciano Afranllie
wrote:
> Hi
>
> I am doing some tests to understand how kafka behaves when adding
> partitions to a topic while producing and consuming.
>
> My test is like this
>
> I
My understanding is that the offsets topic is a compacted topic which should
never be deleted but compacted. Is this true? If this is the case, what
does offsets.retention.minutes really mean here?
On Tue, 12 Apr 2016 at 20:15 Sean Morris (semorris)
wrote:
> I have been seeing the
Hi team,
We have a cluster of 40 nodes in production. We observed that one of the
nodes ran into a situation where it constantly shrank and expanded the ISR for
more than 10 hours and was unable to recover until the broker was bounced.
Here are the snippet of server.log from the broker and controller
You can look at this metric
buffer-available-bytes The total amount of buffer memory that is not being
used (either unallocated or in the free list).
kafka.producer:type=producer-metrics,client-id=([-.\w]+)
On Sat, 9 Apr 2016 at 00:53 Paolo Patierno wrote:
> Hi all,
> is
oll.records to limit the amount of data returned from each call
> to poll() so that you can better predict processing time.
>
> -Jason
>
>
> On Mon, Mar 14, 2016 at 7:04 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Hi team,
> >
> > I have a
Hi team,
I have about 10 consumers using the same consumer group connecting to
Kafka. Occasionally I can see UNKNOWN_MEMBER_ID assigned to some of the
consumers. I want to understand under what situation this would happen.
I use Kafka version 0.9.0.1
You need to change group.max.session.timeout.ms on the broker to be larger
than what you have in the consumer.
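For example, assuming a desired 2-minute timeout (the values are illustrative):

```properties
# broker side (server.properties): raise the allowed ceiling
group.max.session.timeout.ms=120000

# consumer side: must be <= the broker ceiling
session.timeout.ms=120000
```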
On Fri, 11 Mar 2016 at 00:24 Michael Freeman wrote:
> Hi,
> I'm trying to set the following on a 0.9.0.1 consumer.
>
> session.timeout.ms=12
>
Hi team,
I found that GetOffsetShell doesn't work with SASL-enabled Kafka. I believe
this is due to the old producer being used in GetOffsetShell. I want to know
if there is any alternative that provides the same information with secure Kafka
Kafka version 0.9.0.1
Exception
% bin/kafka-run-class.sh
t; > check_command check_kafka_uncleanleader!1!10
> > check_interval 15
> > retry_interval 5
> > }
> > define service {
> > hostgroup_name KafkaBroker
> > use generic-service
> > service_description Kafka Under Replicated Partitions
> > che
; Jens
>
> On Fri, Feb 26, 2016 at 12:38 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Hi team,
> >
> > What is the best way to verify a specific Kafka node functions properly?
> > Telnet the port is one of the approach but I don't think it tells me
> >
Hi team,
What is the best way to verify a specific Kafka node functions properly?
Telnetting the port is one approach but I don't think it tells me
whether or not the broker can still receive/send traffic. I am thinking of
asking for metadata from the broker using consumer.partitionsFor. If it
One additional note on the new consumer: a new topic will not be picked
up by the new consumer immediately. It takes up to metadata.max.age.ms to
refresh metadata and pick up topics that match the pattern at the time of the
check.
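A sketch of the consumer setting that shortens that pickup delay (the default is 300000 ms; the value below is illustrative):

```properties
metadata.max.age.ms=30000
```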
On Thu, 25 Feb 2016 at 08:48 Jason Gustafson
The default value is false.
https://github.com/apache/kafka/blob/d5b43b19bb06e9cdc606312c8bcf87ed267daf44/clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java#L232
On Tue, 23 Feb 2016 at 21:14 Franco Giacosa wrote:
> Hi Guys,
>
> I was going over the
root/consumer? I'm a
> newbie of zk and kafka.
>
> On Fri, Feb 19, 2016 at 4:11 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > I'd assume the offset is managed in zk. then kafka-run-class.sh
> > kafka.admin.ConsumerGroupCommand --list --zookeeper localhost:2181 should
&
hee...@gmail.com> wrote:
> I wrote the consumer by myself in golang using Shopify's sarama lib updated
> of master branch.
>
> On Fri, Feb 19, 2016 at 4:03 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > which version of consumer do you use?
> >
> >
>
which version of consumer do you use?
On Fri, 19 Feb 2016 at 15:26 Amoxicillin <shee...@gmail.com> wrote:
> I tried as you suggested, but still no output of any group info.
>
> On Fri, Feb 19, 2016 at 2:45 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > That is
hee...@gmail.com> wrote:
> How to confirm the consumer groups are alive? I have one consumer in the
> group running at the same time, and could get messages correctly.
>
> On Fri, Feb 19, 2016 at 1:40 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > when using ConsumerG
when using ConsumerGroupCommand you need to make sure your consumer groups
are alive. It only queries offsets for consumer groups that are currently
connected to the brokers
On Fri, 19 Feb 2016 at 13:35 Amoxicillin wrote:
> Hi,
>
> I use kafka.tools.ConsumerOffsetChecker to
You can follow the instructions here
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
On Mon, 15 Feb 2016 at 03:32 Nikhil Bhaware
wrote:
> Hi,
> I have 6 node kafka cluster on which i created topic with 3
>
I think you can try with
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java
On Fri, 5 Feb 2016 at 07:01 wrote:
> Is there a way to detect the broker version (even at a high level 0.8 vs
> 0.9) using the
Hi team,
I want to understand the meaning of request.timeout.ms as used in the
producer. As per the doc this property is used to expire records that have
been waiting for a response from the server for more than request.timeout.ms,
which also means the records have been sitting in InFlightRequests
The new consumer only supports storing offsets in Kafka
On Wed, 27 Jan 2016 at 05:26 wrote:
> Does the new KafkaConsumer support storing offsets in Zookeeper or only in
> Kafka? By looking at the source code I could not find any support for
> Zookeeper, but wanted to confirm
>
> On Tue, Jan 19, 2016 at 7:56 PM, Connie Yang <cybercon...@gmail.com>
> wrote:
>
> > @Ismael, what's the status of the SASL/PLAIN PR,
> > https://github.com/apache/kafka/pull/341?
> >
> >
> >
> > On Tue, Jan 19, 2016 at 6:25 PM, tao xiao <xiao
Thank you.
On Fri, 22 Jan 2016 at 08:39 Guozhang Wang <wangg...@gmail.com> wrote:
> Done.
>
> On Thu, Jan 21, 2016 at 12:38 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Hi Guozhang,
> >
> > Thanks for that.
> >
> > Can you please
Hi team,
In the Kafka protocol wiki it states that version 0 is the only supported
version in all APIs. I want to know if this is still true. If not,
which APIs are now using version 1?
It is possible that you closed the producer before the messages accumulated
in the batch had been sent out.
You can modify your producer as below to make it a synchronous call and test again.
producer.send(new ProducerRecord("test",
0, Integer.toString(i), Integer.toString(i))).get();
On
Hi Ismael,
BTW looks like I don't have the permission to add a KIP in Kafka space. Can
you please grant me the permission?
On Wed, 20 Jan 2016 at 09:40 tao xiao <xiaotao...@gmail.com> wrote:
> Hi Ismael,
>
> Thank you for your reply. I am happy to have a writeup on this.
&g
e changed, some of the changes are in
> the following PR:
>
> https://github.com/apache/kafka/pull/341
>
> As per the discussion in the PR above, a KIP is also required.
>
> Ismael
>
> On Wed, Jan 20, 2016 at 1:48 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
> &
Hi Kafka team,
I want to know if I can plug my own security protocol into Kafka to
implement a project-specific authentication mechanism. The currently supported
authentication protocols, SASL/GSSAPI and SSL, are not supported in my
company and we have our own security protocol to do authentication.
Is
Hi,
I found that the latest mirror maker with the new consumer enabled was unable
to commit offsets in time when mirroring a topic with very infrequent
messages.
I have a topic with a few messages produced every half hour. I set up
mirror maker to mirror this topic with default config. I observed
Hi team,
I have a scenario where I want to write new offsets for a list of topics on
demand. The list of topics is unknown until runtime and the interval
between each commit is undetermined. What would be the best way to do so?
One way I can think of is to create a new consumer and call
ematext.com/> | Contact
> <http://sematext.com/about/contact.html>
>
> On Mon, Jan 4, 2016 at 12:18 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Hi team,
> >
> > I have a scenario where I want to write new offset for a list of topics
> on
> > d
You can bump the log level to warn for a particular class
log4j.logger.kafka.network.Processor=WARN
On Tue, 5 Jan 2016 at 08:33 Dillian Murphey wrote:
> Constant spam of this INFO on my log.
>
> [2016-01-05 00:31:15,887] INFO Closing socket connection to /10.9.255.67.
tralized Log Management
> > Solr & Elasticsearch Support
> > Sematext <http://sematext.com/> | Contact
> > <http://sematext.com/about/contact.html>
> >
> > On Mon, Jan 4, 2016 at 12:18 PM, tao xiao <xiaotao...@gmail.com> wrote:
> >
> > >
to the
> consumer process.
>
> -Jason
>
> On Mon, Jan 4, 2016 at 6:20 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Jason,
> >
> > It normally takes a couple of seconds sometimes it takes longer to join a
> > group if the consumer didn'
Hi team,
I found that the producer metric compression-rate-avg always returns 0 even
with compression.type set to snappy. I drilled down the code and discovered
that the position of bytebuffer in
org.apache.kafka.common.record.Compressor is reset to 0 by
RecordAccumulator.drain() before calling
created https://issues.apache.org/jira/browse/KAFKA-2993. I will submit a
patch for this
On Wed, 16 Dec 2015 at 00:53 Ismael Juma <ism...@juma.me.uk> wrote:
> Yes, please.
>
> Ismael
> On 15 Dec 2015 11:52, "tao xiao" <xiaotao...@gmail.com> wrote:
>
> >
rst 10 polls?
> >
> > Thanks,
> > Jason
> >
> > On Tue, Dec 1, 2015 at 7:43 PM, tao xiao <xiaotao...@gmail.com> wrote:
> >
> > > Hi Jason,
> > >
> > > You are correct. I initially produced 1 messages in Kafka before I
> > >
ybe you could run a test with the
> > org.apache.kafka.tools.ProducerPerformance class to see if it makes a
> > difference?
> >
> > On Tue, Dec 1, 2015 at 11:35 AM tao xiao <xiaotao...@gmail.com> wrote:
> >
> > > Gerard,
> > >
> > >
Hi team,
I am using the new consumer with broker version 0.9.0. I notice that
poll(time) occasionally returns 0 messages even though I have enough
messages in the broker. The rate of returning 0 messages is quite high, like 4
out of 5 polls return 0 messages. It doesn't help by increasing the poll
e consumer(s) keep polling I saw little difference. I did
> see for example that when producing only 1 message a second, still it
> sometimes wait to get three messages. So I also would like to know if there
> is a faster way.
>
> On Tue, Dec 1, 2015 at 10:35 AM tao xiao <xiaotao
Hi team,
In 0.9.0.0 a new tool kafka-consumer-groups.sh is introduced to show offset
and lag for a particular consumer group, preferred over the old
kafka-consumer-offset-checker.sh. However I found that the new tool
only works for consumer groups that are currently active but not for
consumer
The new consumer only works with the broker in trunk now which will soon
become 0.9
On Sun, 1 Nov 2015 at 20:54 Alexey Pirogov wrote:
> Hi, I'm facing an issue with New Consumer. I'm getting
> GroupCoordinatorNotAvailableException during "poll" method:
> 12:32:50,942
Hi,
I am starting a new project that relies heavily on the Kafka consumer. I
took a quick look at the new Kafka consumer and found it provides some of
the features we definitely need. But as it is annotated as unstable, is it
safe to rely on it or will it still be evolving dramatically in coming
0.8.2.1 already supports Kafka offset storage. You can
set offsets.storage=kafka in consumer properties and the high level API is
able to pick it up and commit offsets to Kafka.
Here is the code reference where the Kafka offset logic kicks in
Joris,
You can check out Burrow https://github.com/linkedin/Burrow which gives you
monitoring and alerting capability for offsets stored in Kafka
On Wed, 16 Sep 2015 at 23:05 Joris Peeters
wrote:
> I intend to write some bespoke monitoring for our internal kafka system.
ion?
> afaik there's no such notification mechanism in the High level consumer.
>
>
>
> On Thu, Sep 10, 2015 at 8:43 AM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > You can create message streams using regex that includes all topics. The
> > beauty of regex is
You can create message streams using a regex that includes all topics. The
beauty of regex is that any new topic created will be automatically
consumed as long as the name of the topic matches the regex.
You can check the method createMessageStreamsByFilter in the high level API
On Thu, Sep 10, 2015 at
Here is a good doc describing how to choose the right number of partitions
http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
On Fri, Sep 4, 2015 at 10:08 PM, Jörg Wagner wrote:
> Hello!
>
> Regarding the recommended amount of
In the trunk code mirror maker provides the ability to filter out messages
on demand by supplying a message handler.
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/MirrorMaker.scala#L443
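As a sketch, the handler is wired in on the command line; `com.example.FilteringHandler` is a hypothetical class implementing the message handler interface linked above:

```shell
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist=".*" \
  --message.handler com.example.FilteringHandler
```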
On Wed, 26 Aug 2015 at 11:35 Binh Nguyen Van binhn...@gmail.com wrote:
Hi all,
Correct me if I'm wrong. If compaction is used, last offset + 1 no longer
indicates the next valid offset. For a compacted section the offsets do not
increase sequentially. I think you need to look at the next offset of the last
processed record to figure out what the next offset will be.
On Wed, 29 Jul 2015
James,
You can reference Confluent's schema registry implementation.
http://docs.confluent.io/1.0/schema-registry/docs/index.html
It does a similar thing to what you described: a REST front end that serves
data from a compacted topic, and HA is also provided in the solution.
On Tue, 21 Jul 2015
The OOME issue may be caused
by org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holding
unnecessary byte[] value. Can you apply the patch in below JIRA and try
again?
https://issues.apache.org/jira/browse/KAFKA-2281
On Wed, 15 Jul 2015 at 06:42 francesco vigotti
org.apache.kafka.clients.tools.ProducerPerformance resides in
kafka-clients-0.8.2.1.jar.
You need to make sure the jar exists in $KAFKA_HOME/libs/. I use
kafka_2.10-0.8.2.1
too and here is the output
% bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
USAGE: java
Hi team,
The trunk code of mirror maker currently uses the old consumer API. Is there
any plan to use the new Java consumer API in mirror maker?
if throwing an exception to the user on this error is good
handling or not. What are users supposed to do in that case other than
retry?
Thanks,
Jiangjie (Becket) Qin
On 7/12/15, 7:16 PM, tao xiao xiaotao...@gmail.com wrote:
We saw the error again in our cluster. Anyone has the same issue
We saw the error again in our cluster. Anyone has the same issue before?
On Fri, 10 Jul 2015 at 13:26 tao xiao xiaotao...@gmail.com wrote:
Bump the thread. Any help would be appreciated.
On Wed, 8 Jul 2015 at 20:09 tao xiao xiaotao...@gmail.com wrote:
Additional info
Kafka version
this. But it looks like a work around. We need to check
why this happens exactly and get to the root cause. What do you think?
Getting to the root cause of this might be really useful.
Thanks,
Mayuresh
On Sun, Jul 12, 2015 at 8:45 PM, tao xiao xiaotao...@gmail.com wrote:
Restart the consumers does
Bump the thread. Any help would be appreciated.
On Wed, 8 Jul 2015 at 20:09 tao xiao xiaotao...@gmail.com wrote:
Additional info
Kafka version: 0.8.2.1
zookeeper: 3.4.6
On Wed, 8 Jul 2015 at 20:07 tao xiao xiaotao...@gmail.com wrote:
Hi team,
I have 10 high level consumers connecting
You can follow the instruction listed here
http://kafka.apache.org/contact.html
In short send an email to users-unsubscr...@kafka.apache.org
On Thu, 9 Jul 2015 at 18:52 Monika Garg gargmon...@gmail.com wrote:
Still I am not able to unsubscribe and still getting mails.
Please help me out as
Additional info
Kafka version: 0.8.2.1
zookeeper: 3.4.6
On Wed, 8 Jul 2015 at 20:07 tao xiao xiaotao...@gmail.com wrote:
Hi team,
I have 10 high level consumers connecting to Kafka and one of them kept
complaining about a conflicted ephemeral node for about 8 hours. The log was
filled with below
Hi team,
I have 10 high level consumers connecting to Kafka and one of them kept
complaining about a conflicted ephemeral node for about 8 hours. The log was
filled with the exception below
[2015-07-07 14:03:51,615] INFO conflict in
/consumers/group/ids/test-1435856975563-9a9fdc6c data:
That is so cool. Thank you
On Sun, 28 Jun 2015 at 04:29 Guozhang Wang wangg...@gmail.com wrote:
Tao, I have added you to the contributor list of Kafka so you can assign
tickets to yourself now.
I will review the patch soon.
Guozhang
On Thu, Jun 25, 2015 at 2:54 AM, tao xiao xiaotao
Patch updated. please review
On Mon, 22 Jun 2015 at 12:24 tao xiao xiaotao...@gmail.com wrote:
Yes, you are right. Will update the patch
On Mon, Jun 22, 2015 at 12:16 PM Jiangjie Qin j...@linkedin.com.invalid
wrote:
Should we still store the value bytes when logAsString is set to TRUE
Hi,
I have 3 high level consumers with the same group id. One of the consumers
goes down; I know rebalance will kick in on the remaining two consumers.
What happens if one of the remaining consumers is very slow during
rebalancing and it hasn't released ownership of some of the topics? Will the
Carl,
I doubt the change you proposed will have the at-least-once guarantee.
consumedOffset
is the next offset of the message that is being returned from
iterator.next(). For example, the message returned is A with offset 1 and
then consumedOffset will be 2, set to currentTopicInfo. While the
ErrorLoggingCallback needs some change, though. Can we only
store the value bytes when logAsString is set to true? That looks more
reasonable to me.
Jiangjie (Becket) Qin
On 6/21/15, 3:02 AM, tao xiao xiaotao...@gmail.com wrote:
Yes, I agree with that. It is even better if we can supply our own
Yes, you are right. Will update the patch
On Mon, Jun 22, 2015 at 12:16 PM Jiangjie Qin j...@linkedin.com.invalid
wrote:
Should we still store the value bytes when logAsString is set to TRUE and
only store the length when logAsString is set to FALSE.
On 6/21/15, 7:29 PM, tao xiao xiaotao
itself but just the length if people agree that
for MM we probably are not interested in its message value in callback.
Thoughts?
Guozhang
On Wed, Jun 17, 2015 at 1:06 AM, tao xiao xiaotao...@gmail.com wrote:
Thank you for the reply.
Patch submitted https://issues.apache.org/jira
that if interested.
Thanks,
Jiangjie (Becket) Qin
On 6/13/15, 11:39 AM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I am using mirror maker in trunk to replicate data across two data centers.
While the destination broker was under heavy load and unresponsive, the
send
rate of mirror maker was very
Hi,
I am using mirror maker in trunk to replicate data across two data centers.
While the destination broker was under heavy load and unresponsive, the send
rate of mirror maker was very low and the available producer buffer was
quickly filled up. At the end mirror maker threw an OOME. Detailed
Hi,
Just wondering why setConsumerRebalanceListener is not exposed
in kafka.javaapi.consumer.ConsumerConnector? In the latest trunk code
setConsumerRebalanceListener is
in kafka.javaapi.consumer.ZookeeperConsumerConnector but not in
kafka.javaapi.consumer.ConsumerConnector which makes the method
Hi,
I have two mirror makers A and B both subscribing to the same whitelist.
During topic rebalancing mirror maker A encountered
ZkNoNodeException and then stopped all connections, but mirror maker B
didn't pick up the topics that were consumed by A and left some of the
topics
I use commit 9e894aa0173b14d64a900bcf780d6b7809368384 from trunk code
On Wed, 10 Jun 2015 at 01:09 Jiangjie Qin j...@linkedin.com.invalid wrote:
Which version of MM are you running?
On 6/9/15, 4:49 AM, tao xiao xiaotao...@gmail.com wrote:
Hi,
I have two mirror makers A and B both
to leader, and once when
checking for acks. The first error is thrown if we detect a small ISR
before writing to the leader. The second if the ISR shrank after we
wrote to the leader but before we got enough acks.
Gwen
On Mon, Jun 8, 2015 at 2:51 AM, tao xiao xiaotao...@gmail.com wrote:
Hi
Hi team,
What is the difference between the producer errors NOT_ENOUGH_REPLICAS and
NOT_ENOUGH_REPLICAS_AFTER_APPEND? Does the latter imply that the message
has been written to the leader log successfully? If I have retry turned on
in the producer does it mean that duplicated messages may be written to
The default behavior of the high level consumer is to wait until the next
message comes along. If you want to change this behavior you can change the
consumer setting consumer.timeout.ms to a value greater than -1
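For example (the value is illustrative); once set, the stream iterator throws ConsumerTimeoutException when no message arrives within the timeout:

```properties
consumer.timeout.ms=5000
```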
On Mon, 25 May 2015 at 16:57 Ganesh Nikam ganesh.ni...@gslab.com wrote:
HI All,