> Do you have a specific scenario in mind that would require single-partition
> topics?
>
> Guozhang
>
>
>
> On Mon, Jul 7, 2014 at 7:43 AM, Jason Rosenberg wrote:
>
> > I've been looking at the new consumer api outlined here:
> >
> >
> https://c
I've been looking at the new consumer api outlined here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design
One issue in the current high-level consumer is that it does not do a good
job of distributing a set of topics between multiple consumers, unless each
topic
What's the status for an 0.8.2 release? We are currently using 0.8.0, and
would like to upgrade to take advantage of some of the per-topic retention
options available now in 0.8.1.
However, we'd also like to take advantage of some fixes coming in 0.8.2
(e.g. deleting topics).
Also, we have been
So, I think there are 2 different types of errors you mention. The
first is data-dependent (e.g. it's corrupt or some such). So, there's
no reason to block consumption of other messages that are likely to be
successful, while the data-dependent one won't fix itself no matter how many
times you retry. So,
In my case, we just rolled out a separate 0.8 cluster, and migrated
producers to it over time (took several weeks to get everything
updated to the new cluster). In the transition, we had consumers
running for both clusters. Once no traffic was flowing on the old
cluster, we then shut down the 0.7
Please be sure to update the online config docs with this change! The
per topic options are still listed there
Jason
On Thu, Jan 16, 2014 at 9:57 PM, Ben Summer wrote:
> I see. I don't have version 0.8.1 yet. We just updated to 0.8.0 from beta
> after it became the "stable version".
> Good
12 zookeepers seems like a lot, and you should always, by default,
prefer an odd number of zookeepers. Consumers negotiate with each
other for partition ownership, via zookeeper.
Jason
On Fri, Jan 10, 2014 at 9:20 PM, Guozhang Wang wrote:
> Can you post the consumer logs for the long-ti
Not sure, but I'll try (it's a bit difficult to create a test-case, because
it requires a good bit of integration testing, etc.).
Jason
On Sat, Jan 11, 2014 at 12:06 AM, Jun Rao wrote:
> Do you think you can reproduce this easily?
>
> Thanks,
>
> Jun
>
>
, Jan 10, 2014 at 11:06 AM, Jun Rao wrote:
> Could you increase parallelism on the consumers?
>
> Thanks,
>
> Jun
>
>
> On Thu, Jan 9, 2014 at 1:22 PM, Jason Rosenberg wrote:
>
> > The consumption rate is a little better after the refactoring. The main
> &
y
> after the refactoring?
>
> Thanks,
>
> Jun
>
>
> On Wed, Jan 8, 2014 at 10:44 AM, Jason Rosenberg wrote:
>
> > Yes, it's happening continuously, at the moment (although I'm expecting
> the
> > consumer to catch up soon)
> >
> > It seem
>
>
> On Wed, Jan 8, 2014 at 1:44 PM, Jason Rosenberg wrote:
>
> > Yes, it's happening continuously, at the moment (although I'm expecting
> the
> > consumer to catch up soon).
xception warning. The offset mismatch error should never
> happen. It could be that OffsetOutOfRangeException exposed a bug. Do you
> think you can reproduce this easily?
>
> Thanks,
>
> Jun
>
>
> On Tue, Jan 7, 2014 at 9:29 PM, Jason Rosenberg wrote:
>
> >
Wed, Jan 8, 2014 at 12:07 AM, Jun Rao wrote:
> The WARN and ERROR may not be completely correlated. Could it be that the
> consumer is slow and couldn't keep up with the produced data?
>
> Thanks,
>
> Jun
>
>
> On Tue, Jan 7, 2014 at 6:47 PM, Jason Rosenberg wrot
I've filed https://issues.apache.org/jira/browse/KAFKA-1200, to address the
inconsistent log-level issue.
On Tue, Jan 7, 2014 at 9:47 PM, Jason Rosenberg wrote:
> So, sometimes I just get the WARN from the ConsumerFetcherThread (as
> previously noted, above), e.g.:
>
> 2014-0
t possible lost data if I see the
second ERROR log line?
Jason
On Tue, Dec 24, 2013 at 3:49 PM, Jason Rosenberg wrote:
> But I assume this would not be normally you'd want to log (every
> incoming producer request?). Maybe just for debugging? Or is it only
> for consumer fetch reque
Thanks Joe,
I can confirm that your patch works for me, as applied to 0.8.0.
Jason
On Fri, Dec 20, 2013 at 6:28 PM, Jason Rosenberg wrote:
> Thanks Joe,
>
> I generally build locally, and upload to our maven proxy (using a custom
> pom).
>
> I haven't yet had luck using
Hi Pushkar,
We've been using zk 3.4.5 for several months now, without any
problems, in production.
Jason
On Thu, Jan 2, 2014 at 1:15 AM, pushkar priyadarshi
wrote:
> Hi,
>
> I am starting a fresh deployment of kafka + zookeeper. Looking at zookeeper
> releases, I find 3.4.5 old and stable enough. Ha
ics whose leader is on that broker. We have seen a
> fetcher being killed by a bug in Kafka. Also, if the broker is slow (e.g.
> due to I/O contention), the fetch rate could also be slower than expected.
>
> Thanks,
>
> Jun
>
>
> On Tue, Dec 24, 2013 at 12:48 PM, Jason R
also be
> recorded.
>
> You can check for "Completed XXX request" in the log files to check the
> request info with the correlation id.
>
> Guozhang
>
>
> On Mon, Dec 23, 2013 at 10:46 PM, Jason Rosenberg wrote:
>
>> Hmmm, it looks like I'm enabling a
> I updated http://kafka.apache.org/documentation.html#monitoring
>
> Thanks,
>
> Jun
>
>
> On Mon, Dec 23, 2013 at 10:51 PM, Jason Rosenberg wrote:
>
>> I'm realizing I'm not quite sure what the 'min fetch rate' metrics is
>> indicating, fo
I'm realizing I'm not quite sure what the 'min fetch rate' metric is
indicating, for consumers. Can someone offer an explanation?
Is it related to the 'max lag' metric?
Jason
le request log? It logs the ip of every request.
>
> Thanks,
>
> Jun
>
>
> On Mon, Dec 23, 2013 at 3:52 PM, Jason Rosenberg wrote:
>
> > Hi Guozhang,
> >
> > I'm not sure I understand your first answer. I don't see anything
> > regarding t
; Guozhang
>
>
> On Mon, Dec 23, 2013 at 3:09 PM, Jason Rosenberg wrote:
>
> > In our broker logs, we occasionally see errors like this:
> >
> > 2013-12-23 05:02:08,456 ERROR [kafka-request-handler-2] server.KafkaApis
> -
> > [KafkaApi-45] Error when proces
In our broker logs, we occasionally see errors like this:
2013-12-23 05:02:08,456 ERROR [kafka-request-handler-2] server.KafkaApis -
[KafkaApi-45] Error when processing fetch request for partition [mytopic,0]
offset 204243601 from consumer with correlation id 130341
kafka.common.OffsetOutOfRangeEx
We recently upgraded to 3.4.5, so far without incident. But I'd be
interested to know if there are any confirmed problems with
this!
Jason
On Mon, Dec 23, 2013 at 2:04 PM, Drew Goya wrote:
> Thanks, I migrated our ZK cluster over to 3.3 this weekend. Hopefully that
> does it!
>
Is it possible to expose programmatically the number of brokers in ISR for
each partition? We could make this a gating check before shutting down a
broker gracefully, to make sure things are in good shape. I guess
controlled shutdown assures this anyway, in a sense.
Jason
On Mon, Dec 23
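A gating check like the one floated above could be sketched as follows. This is not Kafka API code; `fetch_partition_isr` is a hypothetical stand-in for whatever metadata call exposes the ISR per partition:

```python
# Sketch (not Kafka API): gate a graceful broker shutdown on every
# partition having at least `min_isr` in-sync replicas.
# fetch_partition_isr is a hypothetical callback returning the ISR
# (a list of broker ids) for a given partition.
def safe_to_shut_down(fetch_partition_isr, partitions, min_isr=2):
    return all(len(fetch_partition_isr(p)) >= min_isr for p in partitions)
```

The operational tooling would call this before taking a broker out of rotation, and refuse (or delay) the shutdown while any partition is under-replicated.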
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> ****/
>
>
> On Wed, Dec 18, 2013 at
How stable is 0.8.1? Will there be a 'release' of it soon, or are there
still significant open issues?
Thanks,
Jason
On Thu, Dec 19, 2013 at 12:17 PM, Guozhang Wang wrote:
> Libo, yes the upgrade from 0.8 to 0.8.1 can be done in place.
>
> Guozhang
>
>
> On Thu, Dec 19, 2013 at 8:57 AM, Yu,
>
>
> On Wed, Dec 18, 2013 at 8:15 AM,
>
>
> On Tue, Dec 17, 2013 at 1:16 AM, Jason Rosenberg wrote:
>
> > Ping
> >
> > Any thoughts on this?
> >
> > Seems like a bug, but then aga
to a whitelist :).
Jason
On Thu, Dec 12, 2013 at 1:01 PM, Jason Rosenberg wrote:
> All, I've filed: https://issues.apache.org/jira/browse/KAFKA-1180
>
> We are needing to create a stream selector that essentially combines the
> logic of the BlackList and WhiteList classes
This is something we do routinely. We wrap the KafkaServer from java, it
works pretty well. You can use the KafkaServerStartable class, or the
KafkaServer class directly (in which case, you might need to do what we did
as described in: https://issues.apache.org/jira/browse/KAFKA-1101)
On Thu
All, I've filed: https://issues.apache.org/jira/browse/KAFKA-1180
We are needing to create a stream selector that essentially combines the
logic of the BlackList and WhiteList classes. That is, we want to select a
topic that contains a certain prefix, as long as it doesn't also contain a
seconda
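As a rough illustration of the combined selector idea (the names and pattern here are made up for illustration, not the KAFKA-1180 patch): a single regular expression with a negative lookahead can express "matches this prefix but not that substring":

```python
import re

# Hypothetical example: accept topics starting with "metrics." unless the
# rest of the name contains "debug" -- a whitelist and a blacklist folded
# into one pattern via a negative lookahead.
TOPIC_FILTER = re.compile(r"^metrics\.(?!.*debug)")

def select_topic(topic):
    return TOPIC_FILTER.match(topic) is not None
```

The same pattern string could, in principle, be handed to a whitelist-style filter, since the exclusion is encoded inside the regex itself.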
>
>
> On Wed, Dec 4, 2013 at 10:2
Here's the jira: https://issues.apache.org/jira/browse/KAFKA-1163
"I was honestly not aware folks still used 2.8.0 and there have been talks
about discontinuing that."
Heh, that sounds amazing, considering that's the binary release version
you've put up for download :)
I am using that too for b
So,
It looks like there is an issue with using snappy version 1.0.4.1, with
java 7, on MacOSX, which is fixed in version 1.0.5. We've verified this.
https://github.com/ptaoussanis/carmine/issues/5
Perhaps 1.0.5 should be the default for kafka 0.8 (or maybe too late for
that?).
I've filed: htt
ll have my code ready to look at soon -- I think its coded
> but I need to clean it up some ...
>
>
> On Fri, Nov 22, 2013 at 5:38 PM, Jason Rosenberg wrote:
> > I think that the problem is, you don't know which partitions a thread is
> > currently 'owning'
itions.
>
> what am I overlooking?
>
> thanks again for taking the time to work through this with me, I hope
> this helpful for a broader audience :)
>
> imran
>
> On Fri, Nov 22, 2013 at 2:37 PM, Jason Rosenberg wrote:
> > With the high-level consumer, the design i
> would just put its offsets in a queue and keep going. (You can
> probably see where this is going for batches ...)
>
>
> thanks a lot, this discussion has been fascinating!
>
> Imran
>
>
> On Fri, Nov 22, 2013 at 1:06 AM, Jason Rosenberg wrote:
> > Again,
>
e
> higher throughput (no need to have a full barrier that stops all the
> workers simultaneously). I've been getting influenced by the akka way
> of thinking. I think I will be able to share some example code
> tomorrow, maybe you and others can take a look and see if you think
>
Imran,
Remember too that different threads will always be processing a different
set of partitions. No two threads will ever own the same partition
simultaneously.
A consumer connector can own many partitions (split among its threads),
each with a different offset. So, yes, it is complicated, a
Hi Imran,
The thing to do is to not have an asynchronous background thread for
committing. Instead, have a time-based "shouldCommit()" function, and
commit periodically, synchronously.
If you have a bursty producer, then you can set a timeout (
consumer.timeout.ms), so that your iter.hasNext() call
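The periodic, synchronous commit described above might look roughly like this. It is a sketch, not real Kafka client code: `stream`, `connector`, and `commit_offsets` are stand-in names for the high-level consumer iterator and its commit call.

```python
import time

def consume(stream, connector, process, commit_interval_secs=10.0):
    """Process messages and commit offsets synchronously, on a timer,
    from the consuming thread itself (no background committer)."""
    last_commit = time.monotonic()
    for message in stream:
        process(message)  # offset is only committed after processing finishes
        if time.monotonic() - last_commit >= commit_interval_secs:
            connector.commit_offsets()  # synchronous commit
            last_commit = time.monotonic()
```

Because the commit happens on the consuming thread, an offset can never be committed ahead of a message that is still being processed.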
Our metrics name for
> maxlag includes the client id.
>
> Thanks,
>
> Jun
>
>
> On Thu, Nov 14, 2013 at 7:47 PM, Jason Rosenberg wrote:
>
> > Hi,
> >
> > We are experimenting with having multiple consumer connectors running in
> > the same pro
Is this something you can make available/open source?
On Fri, Nov 15, 2013 at 1:03 PM, Rajasekar Elango wrote:
> Hi Jonathan
>
> We forked kafka to add SSL feature. It not part of kafka official release
>
> Sent from my iPhone
>
> On Nov 15, 2013, at 12:32 PM, Jonathan Hodges wrote:
>
> > Hi,
>
n Fri, Nov 15, 2013 at 9:58 AM, Neha Narkhede wrote:
> I think ConsumerFetcherManager metric will report data for all of the
> connectors with the same group.id transparently.
>
> Thanks,
> Neha
> On Nov 14, 2013 7:48 PM, "Jason Rosenberg" wrote:
>
> > Hi
Hi,
We are experimenting with having multiple consumer connectors running in
the same process, under the same groupId (but with different topic filters).
I'm wondering what the expected effect of this is with metrics, like
ConsumerFetcherManager-MaxLag
It looks like in AbstractFetcherManager, t
e a way the broker can get into this situation.
>
> Thanks,
> Neha
>
>
> On Tue, Nov 5, 2013 at 4:58 PM, Jason Rosenberg wrote:
>
> > I don't know if I have a way to see the access logs on the
> LB... (still
> > trying to track that down).
> >
>
if the node has not been placed back into
> rotation, at the metadata vip?
>
> Hmmm... if the broker is not placed in the metadata vip, how did it end up
> receiving metadata requests? You may want to investigate that by checking
> the public access logs.
>
> Thanks,
> Neha
&
> Thanks,
> Neha
>
>
> On Mon, Nov 4, 2013 at 11:47 AM, Jason Rosenberg wrote:
>
> > Ok,
> >
> > After adding a delay before enabling a freshly started broker in the
> > metadata vip that clients use, it seems to have drastically reduced the
> > numb
kafka.apache.org/documentation.html#monitoring. Look for
> *QueueTimeMs
>
> Thanks,
> Neha
>
>
> On Fri, Nov 1, 2013 at 12:14 PM, Jason Rosenberg wrote:
>
> > Neha,
> >
> > This cluster has on the order of 750 topics.
> >
> > It looks like if I add a
used to detect a slow broker.
>
> Thanks,
>
> Jun
>
>
> On Fri, Nov 1, 2013 at 10:36 PM, Jason Rosenberg wrote:
>
> > In response to Joel's point, I think I do understand that messages can be
> > lost, if in fact we have dropped down to only 1 member in the ISR at
In response to Joel's point, I think I do understand that messages can be
lost, if in fact we have dropped down to only 1 member in the ISR at the
time the message is written, and then that 1 node goes down.
What I'm not clear on is the conditions under which a node can drop out of
the ISR. You
on all requests? Also how many topics
> existed on this cluster?
>
> Thanks,
> Neha
> On Oct 31, 2013 10:56 PM, "Jason Rosenberg" wrote:
>
> > In this case, it appears to have gone on for 104 seconds. Should it take
> > that long? It doesn't seem t
> On Thu, Oct 31, 2013 at 9:56 PM, Jason Rosenberg wrote:
>
> > Ok,
> >
> > So, I can safely ignore these, it sounds like. I don't see any
> > corresponding logging around it subsequently not succeeding to actually
> > create the topic in zk.
> >
>
in ZK. These INFO logging should be transient
> though.
>
> Thanks,
>
> Jun
>
>
> On Thu, Oct 31, 2013 at 8:41 PM, Jason Rosenberg wrote:
>
> > Some times during a rolling restart, I see lots of these messages during
> > the restart (these happened in the logs o
Sometimes during a rolling restart, I see lots of these messages during
the restart (these happened in the logs of the 2nd server in the cluster to
restart (broker 11), after it had restarted, and during the time
the 3rd node (broker 9) is doing a controlled shutdown before stopping).
Is
I've filed: https://issues.apache.org/jira/browse/KAFKA-1108
On Tue, Oct 29, 2013 at 4:29 PM, Jason Rosenberg wrote:
> Here's another exception I see during controlled shutdown (this time there
> was not an unclean shutdown problem). Should I be concerned about this
> exc
ControlledShutdownRequest(KafkaApis.scala:133)
at kafka.server.KafkaApis.handle(KafkaApis.scala:72)
at
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
at java.lang.Thread.run(Thread.java:662)
Jason
On Fri, Oct 25, 2013 at 11:51 PM, Jason Rosenberg wrote:
>
; > retry on all non-fatal exceptions you have the "at least once"
> guarantee;
> > > and even if you set request.required.acks to -1 and you do not retry,
> you
> > > will not get "at least once".
> > >
> > > As you said, setting r
t this has nothing to do with "at least
> once" guarantee.
>
> Guozhang
>
>
> On Fri, Oct 25, 2013 at 8:55 PM, Jason Rosenberg wrote:
>
> > Just to clarify, I think in order to get 'at least once' guarantees, you
> > must produce messages with
Just to clarify, I think in order to get 'at least once' guarantees, you
must produce messages with 'request.required.acks=-1'. Otherwise, you
can't be 100% sure the message was received by all ISR replicas.
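In 0.8 producer properties terms, that setting is a one-liner (a config fragment, shown for illustration):

```properties
# wait for acknowledgement from all in-sync replicas before considering
# a produce request successful
request.required.acks=-1
```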
On Fri, Oct 25, 2013 at 9:56 PM, Kane Kane wrote:
> Thanks Guozhang, it makes sense if
On Fri, Oct 25, 2013 at 9:16 PM, Joel Koshy wrote:
>
> Unclean shutdown could result in data loss - since you are moving
> leadership to a replica that has fallen out of ISR. i.e., it's log end
> offset is behind the last committed message to this partition.
>
>
But if data is written with 'reque
r
is it something more difficult to recover from?
On Fri, Oct 25, 2013 at 12:51 PM, Jason Rosenberg wrote:
> Neha,
>
> It looks like the StateChangeLogMergerTool takes state change logs as
> input. I'm not sure I know where those live? (Maybe update the doc on
> that wiki pag
,
> Neha
>
>
> On Fri, Oct 25, 2013 at 8:26 AM, Jason Rosenberg wrote:
>
> > Ok,
> >
> > Looking at the controlled shutdown code, it appears that it can fail with
> > an IOException too, in which case it won't report the remaining
> partitions
> &
failed.
>
> Thanks,
> Neha
> On Oct 25, 2013 1:18 AM, "Jason Rosenberg" wrote:
>
> > I'm running into an issue where sometimes, the controlled shutdown fails
> to
> > complete after the default 3 retry attempts. This ended up in one case,
> >
I'm running into an issue where sometimes, the controlled shutdown fails to
complete after the default 3 retry attempts. This ended up in one case,
with a broker undergoing an unclean shutdown, and then it was in a rather
bad state after restart. Producers would connect to the metadata vip,
stil
https://issues.apache.org/jira/browse/KAFKA-1101
filed: https://issues.apache.org/jira/browse/KAFKA-1100
On Wed, Oct 23, 2013 at 12:28 AM, Jun Rao wrote:
> Yes, those metrics names could be simplified. Could you file a jira?
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 22, 2013 at 8:55 PM, Jason Rosenberg wrote:
>
>
I've noticed that there are several metrics that seem useful for monitoring
over time, but which contain generational timestamps in the metric name.
We are using yammer metrics libraries to send metrics data in a background
thread every 10 seconds (to kafka actually), and then they eventually end
u
sumer%3F
> ,
> which may be related to your issue.
>
> Thanks,
>
> Jun
>
>
> On Sun, Oct 20, 2013 at 2:48 AM, Jason Rosenberg wrote:
>
> > Ok,
> >
> > So here's an outline of what I think seems to have happened.
> >
> > I have a consumer
te time
re-polling all those (and getting nothing) before coming back to the topics
that are lagging? Perhaps having a larger fetch size would help here?
Jason
On Sat, Oct 19, 2013 at 6:24 PM, Jason Rosenberg wrote:
> I'll try to, next time it hangs!
>
>
> On Sat, Oct 19, 201
I'll try to, next time it hangs!
On Sat, Oct 19, 2013 at 4:04 PM, Neha Narkhede wrote:
> Can you send around a thread dump of the halted consumer process?
>
>
>
> On Sat, Oct 19, 2013 at 12:16 PM, Jason Rosenberg
> wrote:
>
> > The latest HEAD does seem to so
> On Fri, Oct 18, 2013 at 4:03 PM, Jason Rosenberg wrote:
>
> > Will the 0.8 release come from the HEAD of the 0.8 branch? I'd like to
> > experiment with it, to see if it solves some of the issues I'm seeing,
> with
> > consumers refusing to consume new messages
Will the 0.8 release come from the HEAD of the 0.8 branch? I'd like to
experiment with it, to see if it solves some of the issues I'm seeing, with
consumers refusing to consume new messages. We've been using the beta1
version.
I remember mention of a Jira issue along these lines, which w
awesome
On Fri, Oct 18, 2013 at 1:10 AM, Joel Koshy wrote:
> We should be able to get this in after 0.8.1 and probably before the client
> rewrite.
>
> Thanks,
>
> Joel
>
> On Wednesday, October 16, 2013, Jason Rosenberg wrote:
>
> > This looks great. What
on of the offset commit request is small/empty.
>
> Thanks,
>
> Joel
>
>
> On Wed, Oct 16, 2013 at 10:35 AM, Jason Rosenberg
> wrote:
> > That would be great. Additionally, in the new api, it would be awesome
> > augment the default auto-commit functionality to
it will be useful to have some kind of API that
> informs the client when a rebalance is going to happen. We can think about
> this when we do the client rewrite.
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 15, 2013 at 9:21 PM, Jason Rosenberg wrote:
>
> > Jun,
> &g
complete the replica
> assignment since we can't assign more than 1 replica on the same broker.
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 15, 2013 at 1:47 PM, Jason Rosenberg wrote:
>
> > Is there a fundamental reason for not allowing creation of new topics
> while
>
livery (with duplicates ok).
Jason
On Tue, Oct 15, 2013 at 9:00 PM, Jun Rao wrote:
> If auto commit is disabled, the consumer connector won't call commitOffsets
> during rebalancing.
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 15, 2013 at 4:16 PM, Jason Rosenberg wrote:
I'm looking at implementing a synchronous auto offset commit solution.
People have discussed the need for this in previous
threads. Basically, in my consumer loop, I want to make sure a message
has been actually processed before allowing its offset to be committed.
But I don't want to commit
Is there a fundamental reason for not allowing creation of new topics while
in an under-replicated state? For systems that use automatic topic
creation, it seems like losing a node in this case is akin to the cluster
being unavailable, if one of the nodes goes down, etc.
On Tue, Oct 15, 2013 at
What I would like to see is a way for inactive topics to automatically get
removed after they are inactive for a period of time. That might help in
this case.
I added a comment to this larger jira:
https://issues.apache.org/jira/browse/KAFKA-330
Perhaps it should really be its own jira entry.
ogical offsets to the
> individual messages inside the compressed message.
>
> Thanks,
> Neha
> On Oct 7, 2013 11:36 PM, "Jason Rosenberg" wrote:
>
> > Neha,
> >
> > Does the broker store messages compressed, even if the producer doesn't
> > comp
Neha,
Does the broker store messages compressed, even if the producer doesn't
compress them when sending them to the broker?
Why does the broker re-compress message batches? Does it not have enough
info from the producer request to know the number of messages in the batch?
Jason
On Mon, Oct 7
Seems the default for this is now 1.
Online doc shows 1500.
Just curious, why was this value updated?
Jason
simplify the leader
> transition and let the clients handle the retries for requests that have
> not made it to the desired number of replicas, which is configurable.
>
> We can discuss that in a JIRA if that helps. May be other committers have
> more ideas.
>
> Thanks,
> N
On Sun, Oct 6, 2013 at 4:08 AM, Neha Narkhede wrote:
> Does the
> leader just wait for the followers in the ISR to consume?
>
> That's right. Until that is done, the producer does not get an ack back. It
> has an option of retrying if the previous request times out or fails.
>
>
Ok, so if I initia
icas is shut down, the ISR reduces to remove the replica being shut
> down and the messages will be committed using the new ISR.
>
> Thanks,
> Neha
>
>
> On Fri, Oct 4, 2013 at 11:51 PM, Jason Rosenberg wrote:
>
> > Neha,
> >
> > I'm not sure I und
fore the follower
> gets a chance to copy the message. Can you try your test with num acks set
> to -1 ?
>
> Thanks,
> Neha
> On Oct 4, 2013 1:21 PM, "Jason Rosenberg" wrote:
>
> > All,
> >
> > I'm having an issue with an integration test I
All,
I'm having an issue with an integration test I've setup. This is using
0.8-beta1.
The test is to verify that no messages are dropped (or the sender gets an
exception thrown back if failure), while doing a rolling restart of a
cluster of 2 brokers.
The producer is configured to use 'request
oo much memory or big latency?
> >
> > Regards,
> >
> > Libo
> >
> >
> > -Original Message-
> > From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
> > Sent: Sunday, September 08, 2013 12:46 PM
> > To: users@kafka.apache.org
> > Subjec
I filed this, to address the need for allowing parallelism when consuming
multiple single-partition topics selected with a topic filter:
https://issues.apache.org/jira/browse/KAFKA-1072
On Thu, Oct 3, 2013 at 10:56 AM, Jason Rosenberg wrote:
> Ah,
>
> So this is exposed directly in t
7;s fixable. Since we plan to rewrite the consumer client code in the
> > near
> > > future, it could be considered at that point.
> > >
> > > If you issue a metadata request with an empty topic list, you will get
> > back
> > > the metadata of all topi
issue a metadata request with an empty topic list, you will get back
> the metadata of all topics.
>
> Thanks,
>
> Jun
>
>
> On Wed, Oct 2, 2013 at 1:28 PM, Jason Rosenberg wrote:
>
> > How hard would it be to fix this issue, where we have a topic filter that
> >
How hard would it be to fix this issue, where we have a topic filter that
matches multiple topics, for the load to be distributed over multiple
threads, and over multiple consumers? For some reason, I had thought this
issue was fixed in 0.8, but I guess not?
I am currently using a single partitio
wrote:
> Jason - You should be able to solve that with Jay's proposal below. If you
> just persist the id in a meta file, you can copy the meta file over to the
> new broker and broker will not re-generate another id.
>
> On 10/2/13 11:10 AM, "Jason Rosenberg" wrote
I recently moved away from generating a unique brokerId for each node, in
favor of assigning ids in configuration. The reason for this, is that in
0.8, there isn't a convenient way yet to reassign partitions to a new
brokerid, should one broker have a failure. So, it seems the only
work-around at
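With statically assigned ids, each node's server.properties simply pins its identity (the id value below is illustrative):

```properties
# statically assigned in server.properties, one unique value per node
broker.id=11
```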
I've been doing some testing, trying to understand how the
max.message.bytes works, with respect to sending batches of messages. In a
previous discussion, there appeared to be a suggestion that one work around
when triggering a MessageSizeTooLargeException is to reduce the batch size
and resubmit
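The reduce-and-resubmit workaround could be sketched like this. `send` is a hypothetical stand-in for the producer's batched send call, and whether splitting actually helps depends on whether the broker's size check applies to the whole (possibly compressed) batch or to individual messages:

```python
class MessageSizeTooLarge(Exception):
    """Stand-in for the broker's size-limit rejection."""

def send_with_split(send, batch):
    """Send a batch; on a size rejection, halve it and retry each half.

    A single message that is itself over the limit cannot be split,
    so the error is re-raised in that case.
    """
    try:
        send(batch)
    except MessageSizeTooLarge:
        if len(batch) == 1:
            raise
        mid = len(batch) // 2
        send_with_split(send, batch[:mid])
        send_with_split(send, batch[mid:])
```

Order within the original batch is preserved, since the left half is always retried before the right half.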
filed: https://issues.apache.org/jira/browse/KAFKA-1066
On Tue, Sep 24, 2013 at 12:04 PM, Neha Narkhede wrote:
> This makes sense. Please file a JIRA where we can discuss a patch.
>
> Thanks,
> Neha
>
>
> On Tue, Sep 24, 2013 at 9:00 AM, Jason Rosenberg wrote:
>
>
gt; Thanks,
> Neha
>
>
> On Mon, Sep 23, 2013 at 3:26 PM, Jason Rosenberg wrote:
>
> > Sorry for the crazy long log trace here (feel free to ignore this message
> > :))
> >
> > I'm just wondering if there's an easy way to sensibly reduce the amount
&