Hi all,
I'm experiencing a message replay problem in Kafka, which I suspect is caused
by a corrupted consumer offset, which in turn is caused by corrupted group
metadata.
Background:
* Kafka cluster of 3 brokers with version 0.11.0.0.
* Zookeeper cluster of 3 nodes with version
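One way to see what the group actually committed, independent of any
application state, is to query it from a consumer in the same group. A minimal
sketch, with placeholder broker address, group id, and topic name:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        props.put("group.id", "my-group");              // placeholder group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic
            // Last committed offset for this group, or null if nothing was committed.
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println(tp + " committed: " + committed);
        }
    }
}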
Congrats, everyone! Thanks for driving the release, Ewen!
-James
> On Mar 6, 2018, at 1:22 PM, Guozhang Wang wrote:
>
> Ewen, thanks for driving the release!!
>
>
> Guozhang
>
> On Tue, Mar 6, 2018 at 1:14 PM, Ewen Cheslack-Postava wrote:
>
>> The Apache Kafka community is pleased to announce the release for Apache
>> Kafka 1.0.1.
I want to manually fetch messages from all partitions of a topic. I'm doing
this by:
1. Create a list of TopicPartition objects - one for each partition of my topic
2. Create a KafkaConsumer and call .assign(myTopicPartitionsList)
3. For each TopicPartition, seek to the offset I want to read
But when I ca
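The message is cut off here, but for reference, a minimal sketch of the three
steps above (topic name and target offset are placeholders); note that seek()
must be called after assign() and before the first poll():

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class ManualFetch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // 1. One TopicPartition per partition of the topic.
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo info : consumer.partitionsFor("my-topic")) {
                partitions.add(new TopicPartition(info.topic(), info.partition()));
            }

            // 2. Manual assignment: no subscribe(), no group management.
            consumer.assign(partitions);

            // 3. Seek each partition to the desired offset before polling.
            for (TopicPartition tp : partitions) {
                consumer.seek(tp, 42L); // placeholder offset
            }

            ConsumerRecords<String, String> records = consumer.poll(1000);
            records.forEach(r -> System.out.println(r.partition() + "@" + r.offset()));
        }
    }
}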
Ewen, thanks for driving the release!!
Guozhang
On Tue, Mar 6, 2018 at 1:14 PM, Ewen Cheslack-Postava wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 1.0.1.
>
> This is a bugfix release for the 1.0 branch that was first released with
> 1.0.0 about 4 months ago.
+1
Checked the signature.
Ran the test suite: apart from the flaky testMetricsLeak, all other tests passed.
On Tue, Mar 6, 2018 at 2:45 AM, Damian Guy wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 1.1.0.
>
> This is a minor version release of Apache Kafka. It includes 29 new KIPs.
The Apache Kafka community is pleased to announce the release for Apache Kafka
1.0.1.
This is a bugfix release for the 1.0 branch that was first released with 1.0.0
about 4 months ago. We've fixed 49 issues since that release. Most of these are
non-critical, but in aggregate these fixes will have a significant impact.
On Mon, Mar 5, 2018 at 7:23 PM, 杰 杨 wrote:
>
> Hi:
> I ran into a problem today.
> When I use Kafka Streams to consume one topic, apply the mapValues() method,
> and then write to another topic, it sometimes throws an error.
> This is a code sample:
> new StreamsBuilder().stream(xxxtopic, Consumed.wi
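Since the snippet above is cut off, here is a guess at a complete version of
such a pipeline (topic names, serdes, and the transformation are placeholders,
not the poster's actual code):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Produced;

public class MapValuesDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "mapvalues-demo"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(value -> value.toUpperCase()) // placeholder transformation
               .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

        new KafkaStreams(builder.build(), props).start();
    }
}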
Yes.
It's a known bug. You can read details here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer
You can avoid it by increasing the `request.timeout.ms` parameter for
the producer.
-Matthias
On 3/5/18 9:46 PM, 杰 杨 wrote:
> it seems i d
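For a Streams application like the one quoted, `request.timeout.ms` has to be
routed to the internally created producer. A sketch; the application id and
the timeout value are just examples:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Forward the timeout to the embedded producer (default is 30000 ms);
// 120000 ms is an arbitrary example value.
props.put(StreamsConfig.producerPrefix(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG),
        120000);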
Thanks for creating the JIRA ticket.
The Streams library follows the "event-time" concept by default with the
metadata timestamp extractor, expecting that the timestamp set in this field
reflects "when the event happened in real time":
https://kafka.apache.org/10/documentation/streams/core-concepts#streams_tim
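If the metadata timestamps don't reflect your notion of event time, Streams
lets you plug in a custom extractor. A minimal sketch; the fallback logic here
is just an example:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Sketch of a custom extractor: use the record's metadata timestamp when it
// is valid, otherwise fall back to the previous record's timestamp.
public class FallbackTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        long ts = record.timestamp();
        return ts >= 0 ? ts : previousTimestamp;
    }
}

It is then registered via props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
FallbackTimestampExtractor.class).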
Guozhang,
here is the ticket: https://issues.apache.org/jira/browse/KAFKA-6614
I'd also like to continue the discussion a little bit further about timestamps.
I was trying to test with the broker configured for "CreateTime" and have a
question about sink topic timestamps; back to the example:
KTable, Long> summari
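The example is truncated above; presumably it was a windowed aggregation of
roughly this shape (the names and window size below are guesses, not the
poster's actual code):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedSummary {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Guessed shape: per-key counts over 1-minute windows.
        KTable<Windowed<String>, Long> summarized = builder
                .stream("events", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .windowedBy(TimeWindows.of(60_000L))
                .count();
    }
}

Note that with the sink topic configured for CreateTime, the broker keeps the
timestamps the Streams producer attaches to the result records, rather than
overwriting them with append time.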
I have run the wikifeed example. I have three topics:
wikifeedInputtopicDemo2 - 10 partitions
wikifeedOutputtopicDemo2 - 10 partitions
sumoutputeventopicDemo2 - 5 partitions
I have produced 10 records, but the input topic (wikifeedInputtopicDemo2)
receives more than 10 records.
Can someone explain?
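One way to verify what is actually in the input topic is to read it end to end
with a plain consumer and count the records. A sketch, with placeholder
connection settings:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class TopicRecordCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo info : consumer.partitionsFor("wikifeedInputtopicDemo2")) {
                partitions.add(new TopicPartition(info.topic(), info.partition()));
            }
            consumer.assign(partitions);
            consumer.seekToBeginning(partitions);

            long count = 0;
            while (true) {
                // Stop once a poll returns no more data after reaching the end.
                ConsumerRecords<String, String> batch = consumer.poll(1000);
                if (batch.isEmpty()) break;
                count += batch.count();
            }
            System.out.println("records in topic: " + count);
        }
    }
}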
Hello Kafka users, developers and client-developers,
This is the second candidate for release of Apache Kafka 1.1.0.
This is a minor version release of Apache Kafka. It includes 29 new KIPs.
Please see the release plan for more details:
https://cwiki.apache.org/confluence/pages/viewpage.action?pag
Thank you, Matthias!
We currently use the Kafka consumer and store event-time high watermarks as
offset metadata. This is used during the backup procedure, which creates a
snapshot of the target storage with all events up to a certain timestamp and
no others.
As for the API - I guess being able to pr
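For other readers: the pattern described above can be implemented with the
plain consumer API by putting the watermark into the metadata string of the
commit. A sketch; topic, offset, and timestamp values are made up, and the
consumer is assumed to have a group.id configured:

import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class WatermarkCommit {
    static void commitWithWatermark(KafkaConsumer<?, ?> consumer,
                                    TopicPartition tp,
                                    long nextOffset,
                                    long eventTimeWatermark) {
        // Store the event-time high watermark in the commit's metadata string.
        consumer.commitSync(Collections.singletonMap(
                tp, new OffsetAndMetadata(nextOffset, Long.toString(eventTimeWatermark))));
    }

    static long readWatermark(KafkaConsumer<?, ?> consumer, TopicPartition tp) {
        // During backup, read the watermark back from the committed metadata.
        OffsetAndMetadata committed = consumer.committed(tp);
        return Long.parseLong(committed.metadata());
    }
}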
Hi,
I'm starting a cluster of Kafka brokers using Docker (5 brokers for
example, one broker per container). Kafka version 2.12-0.11.0.0, Zookeeper
3.4.10.
*The scenario:*
- Starting the 1st broker with the config below
*zoo.cfg*
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
c