Good day everyone,
I hope this is the appropriate mailing list; my apologies if not.
We'd like to announce that we open-sourced a web interface to the
Kafka broker that allows you to view, search and examine what's going on on
the wire (messages, keys, etc.) from the browser (bye,
>
> > Hey, thanks for that Dmitriy! I'll have a look.
> >
> >> On Tue, Jul 24, 2018 at 11:18 AM Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com> wrote:
> >> Not really associated with Sarama.
> >>
> >> But your issue sounds pretty
n(topic, 0, sarama.OffsetOldest)
> if err != nil {
> panic(err)
> }
>
> defer func() {
> if err := client.Close(); err != nil {
> panic(err)
> }
> }()
>
> // Count how many messages have been processed
> msgCount := 0
>
> go func() {
> for {
> select {
> case err := <-
Hey Craig,
what exact problem are you having with the Sarama client?
On Mon, Jul 23, 2018 at 5:11 PM Craig Ching wrote:
> Hi!
>
> I'm working on debugging a problem with how message timestamps are handled
> in the sarama client. In some cases, the sarama client won't associate a
> timestamp with a
is the code variant for Kafka 1.0.
>
> The example implements a custom (fault-tolerant) state store backed by CMS,
> which is then used in a Transformer. The Transformer is then plugged into
> the DSL for a mix-and-match setup of DSL and Processor API.
>
>
> On Mon, Apr 9,
implement a custom operator.
>
> Btw: here is an example of TopN:
> https://github.com/confluentinc/kafka-streams-examples/blob/4.0.0-post/src/main/java/io/confluent/examples/streams/TopArticlesExampleDriver.java
>
>
>
> -Matthias
>
> On 4/9/18 4:46 AM,
u check
> if the new count replaces an existing count in the array/list.
>
>
> -Matthias
>
> On 4/6/18 10:36 AM, Dmitriy Vsekhvalnov wrote:
> > Thanks guys,
> >
> > ok, question then - is it possible to use state store with .aggregate()?
> >
> &
ng", could you elaborate a bit more on this logic
> so maybe we can still work around it with the pure high-level DSL?
>
>
> Guozhang
>
>
> On Fri, Apr 6, 2018 at 8:49 AM, Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com>
> wrote:
>
> > Hey, good day eve
Hey, good day everyone,
another kafka-streams Friday question.
We hit a wall with the DSL implementation and would like to try the low-level
Processor API.
What we are looking for is to:
- repartition the incoming source stream by grouping records by some fields
+ windowing (hourly, daily, etc.).
- and
Hi Gary,
don't have experience with other Go libs (they seem to be way younger),
but Sarama is quite low-level, which is at the same time powerful and, to
some extent, more complicated to work with.
With the pure Sarama client you have to implement wildcard (or pattern-based)
topic subscription
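For illustration only: `matchTopics` below is a hypothetical helper, not part of Sarama's API. It shows the pattern-matching half of that job, applied to a topic list you would fetch from cluster metadata yourself:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchTopics returns the subset of topics matching a subscription pattern.
// With a low-level client you run something like this periodically against
// the metadata topic list to emulate wildcard subscription.
func matchTopics(all []string, pattern string) []string {
	re := regexp.MustCompile(pattern)
	var out []string
	for _, t := range all {
		if re.MatchString(t) {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	topics := []string{"orders.eu", "orders.us", "payments.eu"}
	fmt.Println(matchTopics(topics, `^orders\..*`)) // prints [orders.eu orders.us]
}
```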
in the past.
What is the logic behind that? Honestly, I expected sink messages to get the
"now" timestamp.
On Mon, Mar 5, 2018 at 11:48 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> Sounds great! :)
>
> On Mon, Mar 5, 2018 at 12:28 PM, Dmitriy Vsekhvalnov <
> dvsekhval...@g
metadata and override
> it with the append time. So when the messages are fetched by downstream
> processors which always use the metadata timestamp extractor, it will get
> the append timestamp set by brokers.
>
>
> Guozhang
>
> On Mon, Mar 5, 2018 at 9:53 AM, Dmitriy V
regation should be the same as the
> one in the payload, if you do observe that is not the case, this is
> unexpected. In that case could you share your complete code snippet,
> especially how input stream "in" is defined, and your config properties
> defined for us to
Good morning,
we have a simple use case where we want to count the number of events in each
hour, grouped by some fields from the event itself.
Our event timestamp is embedded in the message itself (JSON) and we are using
a trivial custom timestamp extractor (which is called and works as expected).
What we are facing
error
> happens, Streams would not die but recover from the error automatically.
>
> Thus, I would recommend to upgrade to 1.0 eventually.
>
>
> -Matthias
>
> On 2/27/18 8:06 AM, Dmitriy Vsekhvalnov wrote:
> > Good day everybody,
> >
> > we faced unexpected
Good day everybody,
we faced an unexpected kafka-streams application death after 3 months of
work, with the exception below.
Our setup:
- 2 instances (both died) of kafka-stream app
- kafka-streams 0.11.0.0
- kafka broker 1.0.0
Sounds like a rebalance happened and something went terribly wrong this
port it here if it ever happen again in the future.
> We’ll also upgrade all our clusters to 0.11.0.1 in the next days.
>
>
>
> > Le 11 oct. 2017 à 17:47, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com> a
> écrit :
> >
> > Yeah just pops up in my list. Thanks,
"Incorrect consumer offsets after broker
> restart 0.11.0.0" from Phil Luckhurst, it sounds similar.
>
> Thanks,
>
> Ben
>
> On Wed, Oct 11, 2017 at 4:44 PM Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com>
> wrote:
>
> > Hey, want to resurrect this thre
expected that Kafka will fail
symmetrically with respect to any broker.
On Mon, Oct 9, 2017 at 6:26 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
wrote:
> Hi tao,
>
> we had unclean leader election enabled at the beginning. But then disabled
> it and also reduced 'max.poll.records'
" into your favorite search engine...
>
>
> -Matthias
>
>
> On 10/10/17 10:48 AM, Dmitriy Vsekhvalnov wrote:
> > Hi Matthias,
> >
> > thanks. Would you mind pointing me to the correct Jira URL where I can
> > file an issue?
> >
> > Thanks again.
Hi Matthias,
thanks. Would you mind pointing me to the correct Jira URL where I can file
an issue?
Thanks again.
On Tue, Oct 10, 2017 at 8:38 PM, Matthias J. Sax <matth...@confluent.io>
wrote:
> Yes, please file a Jira. We need to fix this. Thanks a lot!
>
> -Matthias
>
> On 10/1
Hi all,
still doing disaster testing with the Kafka cluster; when crashing several
brokers at once we sometimes observe an exception in the kafka-streams app
about the inability to create internal topics:
[WARN ] [org.apache.kafka.streams.processor.internals.InternalTopicManager]
[Could not create internal
election turned on? If killing 100 is the only
> way to reproduce the problem, it is possible with unclean leader election
> turned on that leadership was transferred to an out-of-ISR follower which may
> not have the latest high watermark
> On Sat, Oct 7, 2017 at 3:51 AM Dmitriy V
eed be a bit weird. have you checked
> offsets of your consumer - right after offsets jump back - does it start
> from the topic start or does it go back to some random position? Have you
> checked if all offsets are actually being committed by consumers?
>
> fre 6 okt. 2017 kl. 20:5
> replicas have the same consumer group offset written in them.
> >
> > On Fri, Oct 6, 2017 at 6:08 PM, Dmitriy Vsekhvalnov <
> > dvsekhval...@gmail.com>
> > wrote:
> >
> > > Stas:
> > >
> > > we rely on spring-kafka, it commits
nsumer gets this partition assigned during
> next rebalance it can commit old stale offset- can this be the case?
>
>
> fre 6 okt. 2017 kl. 17:59 skrev Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com
> >:
>
> > Reprocessing same events again - is fine for us (idempotent).
> > >
> > > On Fri, Oct 6, 2017 at 8:35 AM, Manikumar <manikumar.re...@gmail.com>
> > > wrote:
> > >
> > > > normally, log.retention.hours (168hrs) should be higher than
> > > > offsets.retention.minutes (336 hrs)?
> > > >
> > >
Hi Ted,
Broker: v0.11.0.0
Consumer:
kafka-clients v0.11.0.0
auto.offset.reset = earliest
On Fri, Oct 6, 2017 at 6:24 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> What's the value for auto.offset.reset ?
>
> Which release are you using ?
>
> Cheers
>
> On Fri, Oct
Hi all,
we have several times faced a situation where the consumer group started to
re-consume old events from the beginning. Here is the scenario:
1. x3 broker kafka cluster on top of x3 node zookeeper
2. RF=3 for all topics
3. log.retention.hours=168 and offsets.retention.minutes=20160
4. running sustainable load
Ok, we can try that.
Are there other settings to try?
On Thu, Oct 5, 2017 at 20:42 Stas Chizhov <schiz...@gmail.com> wrote:
> I would set it to Integer.MAX_VALUE
>
> 2017-10-05 19:29 GMT+02:00 Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>:
>
> > I see, but produce
tml#non-streams-configuration-parameters
>
> 2017-10-05 19:12 GMT+02:00 Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>:
>
> > replication.factor set to match source topics. (3 in our case).
> >
> > what do you mean by retries? I don't see a retries property in StreamConfi
perties?
>
> BR
>
> tors 5 okt. 2017 kl. 18:45 skrev Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com
> >:
>
> > Hi all,
> >
> > we were testing Kafka cluster outages by randomly crashing broker nodes
> (1
> > of 3 for instance) while still keeping majo
Hi all,
we were testing Kafka cluster outages by randomly crashing broker nodes (1
of 3 for instance) while still keeping majority of replicas available.
From time to time our kafka-streams app crashes with an exception:
[ERROR] [StreamThread-1]
stop the old
> application and clean up its state.
>
>
> -Matthias
>
> On 10/4/17 6:31 AM, Dmitriy Vsekhvalnov wrote:
> > Hi all,
> >
> > What is the correct way to increase RF for the existing internal topics
> > that kafka-streams creates (re-partitioning streams
Hi all,
What is the correct way to increase RF for the existing internal topics that
kafka-streams creates (re-partitioning streams)?
We are increasing RF for the source topics and would like to align
kafka-streams as well. The app-side configuration is simple, but what do we
do with the existing internal topics?
set
> Commit interval makes everything blurry anyway. If you can specify your
> pain more precisely maybe we can work around it.
>
> Best Jan
>
>
> On 10.07.2017 10:31, Dmitriy Vsekhvalnov wrote:
>
>> Guys, let me bump this one again. Still looking for comments about
>>
Guys, let me bump this one again. Still looking for comments about the
kafka-consumer-groups.sh tool.
Thank you.
On Fri, Jul 7, 2017 at 3:14 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
wrote:
> I've tried 3 brokers on command line, like that:
>
> /usr/local/kafka/bin/kafka-con
th --bootstrap-server.
>
> Also, could you paste some results from the console printout?
>
> On 7 July 2017 at 12:47, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > question about lag checking. We've tried to periodically sample cons
Hi all,
question about lag checking. We've tried to periodically sample consumer
lag with:
kafka-consumer-groups.sh --bootstrap-server broker:9092 --new-consumer
--group service-group --describe
it's all fine, but depending on the host we run it from, it gives different
results.
E.g:
- when
Thanks guys,
was exactly `offsets.retention.minutes`.
Figured out that `enable.auto.commit` was actually set to false, somewhere
deep in the Spring properties, and that's what had been causing the offsets
removal when idle.
On Mon, Jul 3, 2017 at 7:04 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.
for longer than
> > this period, then rebalanced, you will likely start consuming the
> messages
> > from the earliest offset again. I'd recommend setting this higher than
> the
> > default of 24 hours.
> >
> > Thanks,
> > Damian
> >
> > On M
Hi all,
looking for some explanations. We are running 2 instances of a consumer (same
consumer group) and getting a little bit of weird behavior after 3 days of
inactivity.
Env:
kafka broker 0.10.2.1
consumer java 0.10.2.1 + spring-kafka + enable.auto.commit = true (all
default settings).
Scenario:
1.
Thanks Damian !
That was it: after fixing the number of compaction threads to be higher than
1, it finally continued to consume the stream.
On Fri, Jun 30, 2017 at 7:48 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com
> wrote:
> Yeah, can confirm there is only 1 vCPU.
>
> Okay, will try tha
l#
> rocksdb-behavior-in-1-core-environments
> It seems like the same issue.
>
> Thanks,
> Damian
>
> On Fri, 30 Jun 2017 at 17:16 Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
> wrote:
>
> > Yes, StreamThread-1 #93 daemon is still at org.rocksdb.RocksDB.
, 2017 at 6:33 PM, Damian Guy <damian@gmail.com> wrote:
> Yep, if you take another thread dump is it in the same spot?
> Which version of streams are you running? Are you using docker?
>
> Thanks,
> Damian
>
> On Fri, 30 Jun 2017 at 16:22 Dmitriy Vsekhvalnov <d
s much state to restore.
> It might be helpful if you take some thread dumps to see where it is
> blocked.
>
> Thanks,
> Damian
>
> On Fri, 30 Jun 2017 at 16:04 Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
> wrote:
>
> > Set org.apache.kafka.st
Set org.apache.kafka.streams to DEBUG.
Here is gist:
https://gist.github.com/dvsekhvalnov/b84b72349837f6c6394f1adfe18cdb61#file-debug-logs
On Fri, Jun 30, 2017 at 12:37 PM, Dmitriy Vsekhvalnov <
dvsekhval...@gmail.com> wrote:
> Sure, how to enable debug logs? Just adjust logba
t as well (so it’s reproducible),
> any chance you could start streaming with DEBUG logs on and collect those
> logs? I’m hoping something shows up there.
>
> Thanks,
> Eno
>
>
> > On Jun 28, 2017, at 5:30 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
> wrote:
>
ll
>
> On Wed, Jun 28, 2017 at 9:51 AM, Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com
> > wrote:
>
> > Here are logs:
> >
> > app:
> > https://gist.github.com/dvsekhvalnov/f98afc3463f0c63b1722417e3710a8
> > e7#file-kafka-streams-log
> > broke
maybe broker and streams logs for the last 30 minutes up to
> the time the application stopped processing records.
>
> Thanks,
> Bill
>
> On Wed, Jun 28, 2017 at 9:04 AM, Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com
> > wrote:
>
> > Hi Bill,
> >
>
m happy to help, but I could use more information. Can you share the
> streams logs and broker logs?
>
> Have you confirmed messages are still being delivered to topics (via
> console consumer)?
>
> Thanks,
> Bill
>
> On Wed, Jun 28, 2017 at 8:24 AM, Dmitriy Vsekhvalnov <
Hi all,
looking for some assistance in debugging a kafka-streams application.
Kafka broker 0.10.2.1 - x3 Node cluster
kafka-streams 0.10.2.1 - x2 application nodes x 1 stream thread each.
In streams configuration only:
- SSL transport
- kafka.streams.commitIntervalMs set to 5000 (instead of
tation:
>
> https://github.com/apache/kafka/blob/trunk/clients/src/
> main/java/org/apache/kafka/common/record/DefaultRecord.java#L341
>
> Hope this helps.
>
> Out of curiosity, which clients do this differently?
>
> Ismael
>
> On Tue, May 30, 2017 at 8:30 AM,
Hi all,
we noticed that when the kafka broker is configured with:
log.message.timestamp.type=LogAppendTime
to timestamp incoming messages on its own, and the producer is configured to
use any kind of compression, what we end up with on the wire for the consumer
is:
- outer compressed envelope - LogAppendTime, by