Did you have any heavy logic in the rebalance callback? There are some
known issues that have been fixed in later versions, such as:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
(so the consumer does not fall out of the rebalance during r
I am also facing the same issue.
Regards,
Abhimanyu
On Thu, May 18, 2017 at 5:29 AM, Cédric Chantepie <
c.chante...@yahoo.fr.invalid> wrote:
> Hi,
>
> I have a test app using Java lib for consumers with Kafka 0.10, using
> Kafka storage for offset.
>
> This app is managing 190 consumers, accro
Hi,
I have a test app using the Java client library for consumers with Kafka 0.10, using Kafka
storage for offsets.
This app is managing 190 consumers, across 19 different consumer groups,
against 12 distinct topics (details below).
When one app instance starts, with 40 partitions per topic, it takes ~1
I'm not too familiar with Spark, but the "earliest"/"latest" configuration
is only relevant if your consumer does not hold a valid offset.
If you read up to offset N, when you restart you'll start from N.
If you start a new consumer, it has no offset; that's when the above
configuration takes effect.
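For illustration, a minimal sketch (assuming the plain Java consumer; the broker address, group, and topic names are placeholders) of where auto.offset.reset comes into play:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetResetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "my-group");                   // placeholder group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Only used when this group has no committed offset for a partition
        // (new group, or offsets expired); otherwise consumption resumes from
        // the last committed offset.
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}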
Another quick question:
Say we chose to add our customers' certificates directly to our brokers'
trust store and vice versa; could that work? There is no documentation on
the Kafka or Confluent site for this.
Thanks.
On Wed, May 17, 2017 at 1:56 PM, Rajini Sivaram
wrote:
> Raghav,
>
> 1. Yes, yo
Did you try out the Kafka Streams API instead of wrapping the consumer? It
already supports lambdas:
https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/main/java/io/confluent/examples/streams/MapFunctionLambdaExample.java#L126
Full docs: http://docs.confluent.io/current/streams
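Along the lines of that example, a minimal sketch of a lambda-based Streams topology, assuming the 0.10.x KStreamBuilder API; the application id, broker address, and topic names are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class MapLambdaSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "map-lambda-sketch"); // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> input = builder.stream("input-topic");   // placeholder topic
        // Lambdas plug straight into the DSL operators.
        input.mapValues(value -> value.toUpperCase())
             .to("output-topic");                                        // placeholder topic

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}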
Raghav,
1. Yes, your customers can use certificates signed by a trusted authority.
You can simply omit the truststore configuration for your broker in
server.properties, and Kafka would use the default, which will trust the
client certificates. If your brokers are using SSL for inter-broker
commun
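To illustrate the client side of this, a minimal sketch of the relevant SSL settings; the paths, passwords, and broker address are placeholders, and with a CA-signed broker certificate the truststore entries can often be dropped in favour of the JVM default:

import java.util.Properties;

public class SslClientConfigSketch {
    // Paths and passwords below are placeholders.
    static Properties sslClientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder SSL listener
        props.put("security.protocol", "SSL");
        // Keystore holding the client certificate signed by the trusted CA:
        props.put("ssl.keystore.location", "/path/to/client.keystore.jks");
        props.put("ssl.keystore.password", "keystore-password");
        props.put("ssl.key.password", "key-password");
        // If the broker certificate is also signed by a well-known CA, the client
        // can rely on the JVM's default truststore and omit these two settings:
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "truststore-password");
        return props;
    }
}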
Hello,
I've been looking at writing a Java 8 streams API wrapper for the Kafka
consumer. Since this seems like a common use case, I was wondering if
someone in the user community had already begun a project like this.
My goal is to be able to get back a Stream<ConsumerRecord<K, V>> wrapping
the results of #poll() whic
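For illustration, a minimal sketch of such a wrapper (wrapping a single poll() call in a java.util.stream.Stream; the class and method names are hypothetical):

import java.util.stream.Stream;
import java.util.stream.StreamSupport;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public final class StreamingConsumer {
    // Wraps the records from a single poll() in a sequential java.util.stream.Stream.
    public static <K, V> Stream<ConsumerRecord<K, V>> pollAsStream(
            Consumer<K, V> consumer, long timeoutMs) {
        ConsumerRecords<K, V> records = consumer.poll(timeoutMs);
        // ConsumerRecords is Iterable, so its spliterator feeds StreamSupport directly.
        return StreamSupport.stream(records.spliterator(), false);
    }
}

Each such stream is bounded by that single poll; a continuous Stream would need to re-poll lazily, which is where back-pressure and commit semantics become the interesting part of the design.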
Hi, list.
I'm trying to re-process a topic in Kafka, but when I request the earliest
offsets, the code below always returns the same value as the latest offsets
(i.e. the same as if I replaced OffsetRequest.EarliestTime() with
OffsetRequest.LatestTime()). Is there something that I'm missing? I'm pretty
sure that this code
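As a cross-check, a minimal sketch using the newer consumer API (0.10.1+) to compare earliest and latest offsets for a partition; the broker address and topic name are placeholders:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class OffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // placeholder broker
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("my-topic", 0);       // placeholder topic
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            Map<TopicPartition, Long> earliest = consumer.beginningOffsets(Collections.singleton(tp));
            Map<TopicPartition, Long> latest = consumer.endOffsets(Collections.singleton(tp));
            // If these two values match, the partition currently holds no data older
            // than the latest offset (e.g. retention removed it), which would explain
            // earliest and latest requests returning the same result.
            System.out.println("earliest=" + earliest.get(tp) + " latest=" + latest.get(tp));
        }
    }
}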
One follow-up question, Rajini:
1. Can we use some other mechanism, like having our customers use a well-known
CA which JKS understands, so that we don't have to ask our
customers to do this certificate-in and certificate-out thing? I am just
trying to understand if we can make our custome
Sachin,
We are discussing how to work around KAFKA-4740 for poison pill records:
https://issues.apache.org/jira/browse/KAFKA-5157
Please share your scenario and your opinions on the solution there.
Guozhang
On Tue, May 16, 2017 at 9:50 PM, Sachin Mittal wrote:
> Folks is there any updat
Hi
I was wondering if something like this was possible. I'd like to be able to use
partitions to gain some IO parallelism, but certain sets of partitions should
not be distributed across different machines. Let's say I have data that can be
processed by time bucket, but I'd like each day's data
That's a great tip! Thank you
On Wed, May 17, 2017 at 9:38 AM Matthias J. Sax
wrote:
> Hey,
>
> if you need to request topic creation in advance, I would recommend doing
> manual re-partitioning via through() -- this allows you to control
> the topic names and should make your setup more robust.
Hey,
at the moment there is no official API for that -- however, using
`KSTreamBuilder#table()` we internally do the exact same thing -- we
don't create an additional changelog topic, but use the original input
topic for that.
Thus, it might make sense to expose this as an official API at the Processor
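For reference, a minimal sketch of that `KSTreamBuilder#table()` usage under the 0.10.2 API; the topic and store names are placeholders. The input topic itself doubles as the changelog, so no extra topic is created:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class TableFromTopicSketch {
    static KStreamBuilder buildTopology() {
        KStreamBuilder builder = new KStreamBuilder();
        // The table is materialized into the state store "my-store", and the original
        // "my-compacted-topic" serves as its changelog; no additional topic is created.
        KTable<String, String> table =
                builder.table(Serdes.String(), Serdes.String(), "my-compacted-topic", "my-store");
        return builder;
    }
}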
Hey,
if you need to request topic creation in advance, I would recommend doing
manual re-partitioning via through() -- this allows you to control
the topic names and should make your setup more robust.
Eg.
stream.selectKey().through("my-own-topic-name").groupByKey()...
For this case, Streams wi
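Expanding that line into a minimal sketch (0.10.x DSL; the topic names, store name, and re-keying logic are placeholders):

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class ManualRepartitionSketch {
    static KStreamBuilder buildTopology() {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> stream = builder.stream("input-topic");   // placeholder topic
        KTable<String, Long> counts = stream
                .selectKey((key, value) -> value)     // placeholder re-keying logic
                .through("my-own-topic-name")         // explicitly named repartition topic, created up front
                .groupByKey()
                .count("counts-store");               // placeholder store name
        return builder;
    }
}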
Hi Group,
Recently I have been trying to tune Kafka write performance to improve throughput. On
my Kafka broker, there are 3 disks (7200 RPM).
For one disk, the Kafka write throughput can reach 150 MB/s. In my opinion, if I
send messages to topic test_p3 (which has 3 partitions located on different disk
I've seen cases where setting network configurations within the OS can help mitigate
some of the "Too many open files" issue as well.
Try changing the following OS settings so that used network
connections close as quickly as possible, in order to keep file handle use down:
sysctl -w
Fathima,
In 0.11 there will be such a mechanism (see KIP-98), but in current
versions, you have to accept the duplicates if you don't want to lose messages.
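For reference, a minimal sketch of what that KIP-98 mechanism looks like with the 0.11+ producer API; the broker address, transactional id, and topic name are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");        // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("enable.idempotence", "true");                 // no duplicates from producer retries
        props.put("transactional.id", "my-transactional-id");    // placeholder id, enables transactions

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
            producer.commitTransaction();                         // or abortTransaction() on failure
        }
    }
}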
On Wed, May 17, 2017 at 5:31 AM, Fathima Amara wrote:
> Hi Mathieu,
>
> Thanks for replying. I've already tried by setting "retries" to higher
I just upgraded Kafka Streams to 0.10.2.1 and have the exact same symptom:
new SST files keep getting created and old ones are never deleted. Note
that when I cleanly exit my streams application, all disk space is almost
instantly reclaimed, and the total size of the database becomes about the
amou
Hi,
You should upgrade your Kafka version; this was a bug fixed in KAFKA-3894:
https://issues.apache.org/jira/browse/KAFKA-3894
Generally it's a very good idea to keep on top of Kafka version upgrades:
numerous bugs are fixed with every release, and its stability goes
up each time.
On Tue, Ma
Definitely! Thanks, I'll try this.
On Wed, May 17, 2017 at 10:59 AM, Damian Guy wrote:
> Sorry misread your question!
> If the local state is destroyed there will be no checkpoint file and the
> input topic will be read from the earliest offset. So it will restore all
> state.
>
> On Wed, 17 May
Sorry misread your question!
If the local state is destroyed there will be no checkpoint file and the
input topic will be read from the earliest offset. So it will restore all
state.
On Wed, 17 May 2017 at 09:57 Damian Guy wrote:
> Hi Frank,
>
> Stores.create("store")
> .withKeys(Serdes.
Hi Frank,
Stores.create("store")
.withKeys(Serdes.String())
.withValues(Serdes.String())
.persistent()
.disableLogging()
.build();
Does that help?
Thanks,
Damian
On Wed, 17 May 2017 at 06:09 Frank Lyaruu wrote:
> Hi Kafka people,
>
> I'm using the lo
Hi Sini,
This is beyond the scope of KIP-138, but
https://issues.apache.org/jira/browse/KAFKA-3514 exists to track such
improvements
Thanks,
Michal
On 17 May 2017 5:10 p.m., Peter Sinoros Szabo
wrote:
Hi,
I see, now it's clear why the repeated punctuations use the same time value
in that
Hi,
I see, now it's clear why the repeated punctuations use the same time value
in that case.
Do you have a JIRA ticket to track improvement ideas for that?
It would be great to have an option to:
- advance the stream time before calling the process() on a new record -
this would prevent to pr
+1
On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava
wrote:
> +1 (binding)
>
> I mentioned this in the PR that triggered this:
>
> > KIP is accurate, though this is one of those things where we should
> probably get a KIP for a standard set of config options across all tools, so
> additions like
Awesome. Thanks Bharat.
2017-05-16 21:27 GMT+02:00 BigData dev :
> Hi,
> key.converter and value.converter can be overridden at the connector level.
> This has been supported since Kafka 0.10.1.0.
>
> For more info, refer to this KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 75+-+Add+p