I have a sneaking suspicion this is out of scope for Kafka Streams, however I
thought it wouldn't hurt to ask... I'm trying to implement a temperature
monitoring system. Kafka Streams seems great for doing that. The one
scenario that I'm not able to cover, however, is detecting when a
temperature
Do you have unclean leader election turned on? If killing 100 is the only
way to reproduce the problem, it is possible with unclean leader election
turned on that leadership was transferred to an out-of-ISR follower, which
may not have the latest high watermark
On Sat, Oct 7, 2017 at 3:51 AM Dmitriy
Hi Stas,
Would you mind creating a JIRA for this functionality request so that we
won't forget about it and drop it on the floor?
Guozhang
On Fri, Oct 6, 2017 at 1:10 PM, Stas Chizhov wrote:
> Thank you!
>
> I guess eventually consistent reads might be a reasonable trade off
Ok I see. Thanks again!
On Fri, 6 Oct 2017 at 22:13, Matthias J. Sax wrote:
> >> I guess eventually consistent reads might be a reasonable trade off if
> you
> >> can get ability to serve reads without downtime in some cases.
>
> Agreed :)
>
> >> By the way standby replicas
About to verify the hypothesis on Monday, but it looks like that in the latest
tests. Need to double-check.
On Fri, Oct 6, 2017 at 11:25 PM, Stas Chizhov wrote:
> So no matter in what sequence you shutdown brokers it is only 1 that causes
> the major problem? That would indeed be a bit
So no matter in what sequence you shut down brokers, it is only one that causes
the major problem? That would indeed be a bit weird. Have you checked the
offsets of your consumer right after the offsets jump back - does it start
from the topic start or does it go back to some random position? Have you
>> I guess eventually consistent reads might be a reasonable trade off if you
>> can get ability to serve reads without downtime in some cases.
Agreed :)
>> By the way standby replicas are just extra consumers/processors of input
>> topics? Or is there some custom protocol for sinking the
Thank you!
I guess eventually consistent reads might be a reasonable trade off if you
can get the ability to serve reads without downtime in some cases.
By the way, are standby replicas just extra consumers/processors of the input
topics? Or is there some custom protocol for syncing the state?
Fri 6
Also voting in favor of reactive-kafka. Should fit nicely into your akka app.
Jozef
Sent from [ProtonMail](https://protonmail.ch), encrypted email based in
Switzerland.
> Original Message
> Subject: Re: Scala API
> Local Time: October 6, 2017 8:09 PM
> UTC Time: October 6,
Yeah, probably we can dig around.
One more observation: the most lag/re-consumption trouble happens when we
kill the broker with the lowest id (e.g. 100 from [100,101,102]).
When crashing other brokers there is nothing special happening; the lag
grows a little bit but nothing crazy (e.g. thousands, not
Ted: when choosing earliest/latest you are saying: if it happens that there
is no "valid" offset committed for a consumer (for whatever reason:
bug/misconfiguration/no luck) it will be ok to start from the beginning or
end of the topic. So if you are not ok with that you should choose none.
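For reference, a minimal consumer configuration sketch along these lines (the bootstrap servers and group id are placeholders, not from the thread):

```properties
# Placeholder connection settings
bootstrap.servers=localhost:9092
group.id=example-group
# With "none", a missing or invalid committed offset raises
# NoOffsetForPartitionException instead of silently jumping to the
# beginning (earliest) or end (latest) of the topic.
auto.offset.reset=none
```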
Hi.
IMO reactive-kafka gives a very nice API for streams. However, if you want
an alternative to using streams, you can try Scala-kafka-client:
http://www.cakesolutions.net/teamblogs/getting-started-with-kafka-using-scala-kafka-client-and-akka
which doesn't use streams but gives nice integration
Setting topic policy to "compact,delete" should be sufficient. Cf.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-71%3A+Enable+log+compaction+and+deletion+to+co-exist
Note: retention time is not based on wall-clock time, but on embedded
record timestamps. Thus, old messages only get deleted
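A per-topic override can be set with the stock tooling; a sketch assuming a topic named my-topic and a local ZooKeeper (adjust both to your setup):

```
# Enable compaction and time-based deletion together on one topic (0.10.2+)
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config cleanup.policy=compact,delete
```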
No, that is not possible.
Note: standby replicas might "lag" behind the active store, and thus
you would get different results if querying standby replicas were
supported.
We might add this functionality at some point though -- but there are no
concrete plans atm. Contributions are always
Hi,
No, that isn't supported.
Thanks,
Damian
On Fri, 6 Oct 2017 at 04:18 Stas Chizhov wrote:
> Hi
>
> Is there a way to serve read requests from standby replicas?
> StreamsMetadata does not seem to provide standby end points as far as I
> can see.
>
> Thank you,
>
The new clients (producer/consumer/admin) as well as Connect and Streams
API are only available in Java.
You can use Streams API with Scala though. There is one thing you need
to consider:
A brief search brought me to related discussion on this JIRA:
https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15906349&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15906349
FYI
On Fri, Oct 6, 2017 at 10:37 AM, Manikumar
@Ted Yes, I think we should add a log warning message.
On Fri, Oct 6, 2017 at 9:50 PM, Vincent Dautremont <
vincent.dautrem...@olamobile.com.invalid> wrote:
> is there a way to read messages on a topic partition from a specific node
> that we choose (and not from the topic partition leader)?
>
Well, as far as I remember this was an old issue with 0.10.0.x or something.
In 0.10.2.x the librocksdbjni dll is part of rocksdbjni-5.0.1.jar, so there is
no need to build it separately for Windows.
Did something get changed in 0.11.x?
Thanks
Sachin
On Fri, Oct 6, 2017 at 10:00 PM, Ted Yu
I assume you have read
https://github.com/facebook/rocksdb/wiki/Building-on-Windows
Please also see https://github.com/facebook/rocksdb/issues/2531
BTW your question should be directed to rocksdb forum.
On Fri, Oct 6, 2017 at 6:39 AM, Valentin Forst wrote:
> Hi there,
>
>
is there a way to read messages on a topic partition from a specific node
that we choose (and not from the topic partition leader)?
I would like to check for myself that each of the __consumer_offsets partition
replicas has the same consumer group offset written in it.
On Fri, Oct 6, 2017 at
Stas:
we rely on spring-kafka; it commits offsets "manually" for us after the event
handler completes. So it's kind of automatic once there is a constant stream
of events (no idle time, which is true for us), though it's not what the pure
kafka-client calls "automatic" (flush commits at fixed intervals).
You don't have autocommit enabled, which means you commit offsets yourself -
correct? If you store them per partition somewhere and fail to clean that up
upon rebalance, then the next time the consumer gets this partition assigned
it can commit an old, stale offset - can this be the case?
Fri
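To illustrate the failure mode Stas describes, here is a toy model of an external per-partition offset store that is not cleaned up on rebalance (all names are invented for the sketch; this is not the spring-kafka API):

```python
# Toy model: a per-partition offset cache kept outside Kafka. If it is
# not cleared when the partition is revoked, the consumer can "resume"
# from a stale offset after it gets the partition back.

class OffsetCache:
    def __init__(self):
        self.offsets = {}  # partition -> last processed offset

    def record(self, partition, offset):
        self.offsets[partition] = offset

    def on_partitions_revoked(self, partitions, cleanup):
        # cleanup=False models the bug: stale entries survive rebalance
        if cleanup:
            for p in partitions:
                self.offsets.pop(p, None)

cache = OffsetCache()
cache.record("topic-0", 100)  # processed up to offset 100

# Partition is revoked WITHOUT cleanup; meanwhile another consumer
# advances it to offset 500, then the partition comes back to us.
cache.on_partitions_revoked(["topic-0"], cleanup=False)

# On reassignment we resume from the cached value: a stale commit that
# rewinds the group by 400 records.
resume_at = cache.offsets.get("topic-0")
print(resume_at)  # 100
```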
Reprocessing the same events again is fine for us (idempotent), while losing
data is more critical.
What are the reasons for such behaviour? Consumers are never idle, always
committing - probably something wrong with the broker setup then?
On Fri, Oct 6, 2017 at 6:58 PM, Ted Yu wrote:
Stas:
bq. using anything but none is not really an option
If you have time, can you explain a bit more ?
Thanks
On Fri, Oct 6, 2017 at 8:55 AM, Stas Chizhov wrote:
> If you set auto.offset.reset to none next time it happens you will be in
> much better position to find
If you set auto.offset.reset to none, next time it happens you will be in a
much better position to find out what happened. Also, in general, with the
current semantics of the offset reset policy, IMO using anything but none is
not really an option unless it is ok for the consumer to lose some data (latest) or
Should Kafka log a warning if log.retention.hours is lower than the number of
hours specified by offsets.retention.minutes?
On Fri, Oct 6, 2017 at 8:35 AM, Manikumar wrote:
> normally, log.retention.hours (168hrs) should be higher than
> offsets.retention.minutes (336
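The comparison being debated is easier to see with both settings in a common unit; a trivial sketch using the values from this thread (the printed "warning" is just an illustration, not an existing broker feature):

```python
# Compare the two broker retention settings in hours.
log_retention_hours = 168          # log.retention.hours (7 days)
offsets_retention_minutes = 20160  # offsets.retention.minutes (14 days)

offsets_retention_hours = offsets_retention_minutes / 60  # 336.0

# If committed offsets expire before the log segments do, an idle
# consumer group can lose its position and fall back to whatever
# auto.offset.reset dictates.
if offsets_retention_hours < log_retention_hours:
    print("warning: offsets may expire before the data does")
else:
    print("ok: offsets outlive the log retention window")
```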
Hi,
I'm having the same setup as Dmitriy; I've experienced exactly the same
issue already 2 times this last month.
(The only difference with Dmitriy's setup is that I have librdkafka 0.9.5
clients.)
It's as if the __consumer_offsets partitions were not synced but still
reported as synced (and so
normally, log.retention.hours (168hrs) should be higher than
offsets.retention.minutes (336 hrs)?
On Fri, Oct 6, 2017 at 8:58 PM, Dmitriy Vsekhvalnov
wrote:
> Hi Ted,
>
> Broker: v0.11.0.0
>
> Consumer:
> kafka-clients v0.11.0.0
> auto.offset.reset = earliest
>
>
>
>
Hi Ted,
Broker: v0.11.0.0
Consumer:
kafka-clients v0.11.0.0
auto.offset.reset = earliest
On Fri, Oct 6, 2017 at 6:24 PM, Ted Yu wrote:
> What's the value for auto.offset.reset ?
>
> Which release are you using ?
>
> Cheers
>
> On Fri, Oct 6, 2017 at 7:52 AM, Dmitriy
What's the value for auto.offset.reset ?
Which release are you using ?
Cheers
On Fri, Oct 6, 2017 at 7:52 AM, Dmitriy Vsekhvalnov
wrote:
> Hi all,
>
> we several times faced a situation where a consumer group started to re-consume
> old events from the beginning. Here is
The graph image didn't come through.
Consider using a third-party site for hosting the image.
On Fri, Oct 6, 2017 at 6:46 AM, Alexander Petrovsky
wrote:
> Hello!
>
> I observe the follow strange behavior in my kafka graphs:
>
>
> As you can see, the topic __consumer_offsets have very
Hello
I understand that a compacted topic is meant to keep at least the latest
key-value pair.
However, I am having an issue since it can happen that an entry becomes old
and I need to remove it. It may also occur that I am not able to send a
key/"null" pair. So I need another method to remove my
Hello!
I observe the following strange behavior in my Kafka graphs:
As you can see, the topic __consumer_offsets has a very high bit rate - is
that okay? Or should I find the consumers that are using ConsumerGroups and
fix some parameters? And which parameters should I fix?
--
Петровский Александр / Alexander
Hi all,
we several times faced a situation where a consumer group started to re-consume
old events from the beginning. Here is the scenario:
1. x3 broker kafka cluster on top of x3 node zookeeper
2. RF=3 for all topics
3. log.retention.hours=168 and offsets.retention.minutes=20160
4. running sustainable load
Hi there,
We have Kafka 0.11.0.0 running on DC/OS. So, I am developing on Windows (it's
not my fault ;-)) and have to ship to a Linux container (DC/OS) in order to run
a Java app.
What is the best way to use the kafka-streams Maven dependency w.r.t. RocksDB
in order to work on both OSs?
Currently
Hi
Is there a way to serve read requests from standby replicas?
StreamsMetadata does not seem to provide standby end points as far as I
can see.
Thank you,
Stas
Thank you.
I would have expected the topic to be created either by the producer or the
consumer, as it is a bit nondeterministic whether the consumer or producer
will come up first.
Kind regards
On Fri, Oct 6, 2017 at 12:53 PM, Michal Michalski <
michal.michal...@zalando.ie> wrote:
> Hey
Hey Josh,
Consumption from a non-existent topic will end up with "LEADER_NOT_AVAILABLE".
However (!) I just tested it locally (Kafka 0.11) and it seems like
consuming from a topic that doesn't exist with auto.create.topics.enable
set to true *will create it* as well (I'm checking it in Zookeeper's