Hi Andras,
Thanks for your response!
For log.flush.offset.checkpoint.interval.ms, do we write out only one recovery
point for all logs?
But if I have 3 partitions, and the offset is different for each partition,
what happens? Do we save
3 different offsets in the text file? Or just one for the
Hi Waleed,
generally extra work is necessary only when the client uses a different
message format version than what is used in the broker log. Then the broker
has to convert between those formats.
In case of 0.8 and 0.9 there is no difference in the message format: both
use version 0.
Best regards,
Hi Adrien,
Every log.flush.offset.checkpoint.interval.ms we write out the current
recovery point for all logs to a text file in the log directory to avoid
recovering the whole log on startup.
And every log.flush.start.offset.checkpoint.interval.ms we write out the
current log start offset for al
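To make the per-partition question above concrete: to my understanding the checkpoint is one small text file per log directory, with one entry per topic-partition, so three partitions mean three entries, not one. A toy parser for that file layout (the format details and topic names here are my assumptions, not authoritative):

```python
# Hypothetical sketch of parsing a Kafka-style offset checkpoint file.
# Assumed layout: line 1 = format version, line 2 = entry count,
# then one "topic partition offset" entry per line.
def parse_checkpoint(text):
    lines = text.strip().splitlines()
    version = int(lines[0])
    count = int(lines[1])
    entries = {}
    for line in lines[2:2 + count]:
        topic, partition, offset = line.split()
        # One recovery point is kept per (topic, partition) pair.
        entries[(topic, int(partition))] = int(offset)
    return version, entries

sample = """0
3
orders 0 1200
orders 1 950
orders 2 1431
"""
version, offsets = parse_checkpoint(sample)
print(offsets[("orders", 1)])  # 950
```

So each of the three partitions keeps its own recovery point in the same file.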
Thanks a lot Soenke,
Your explanation makes a lot of sense.
On Mon, Feb 26, 2018 at 10:05 PM Sönke Liebau
wrote:
> Hi Reema, hi Naresh,
>
> I'll try and answer both your questions together by expanding on the
> topic a bit. Also, rereading my message I realize, that I phrased that
> somewhat am
Hi Reema, hi Naresh,
I'll try and answer both your questions together by expanding on the
topic a bit. Also, rereading my message I realize, that I phrased that
somewhat ambiguously, since a few of the terms in there are
overloaded.
First off, if you are using the java consumer or producer (which
It should always require a ZooKeeper connection, because internally Kafka
brokers interact with ZooKeeper for all metadata about topics.
But it's interesting: how would you give departments access to Kafka nodes?
@Sönke,
Could you please shed some light on giving departments access to Kafka
no
Hello Joseph,
As its fixed version indicates, it has been fixed in the upcoming 1.0.1
release, which is expected to be released very soon.
Guozhang
On Mon, Feb 26, 2018 at 11:23 AM, Joseph Ziegler <
josephziegler2...@gmail.com> wrote:
> Hello All,
>
> Is https://issues.apache.org/jira/browse/KAF
Hi Sönke,
Thanks for the info, it is helpful!
I can modify things so that the departments can only access the Kafka nodes
themselves. However, how would the consumers connect to the topics then? Don't
the consumer clients need to connect via ZooKeeper?
Thanks,
Reema
On Fri, Feb 23, 2018 at 10:50 P
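On the ZooKeeper question above: as far as I know, the newer Java consumer (0.9+) talks only to the brokers via a bootstrap list and never contacts ZooKeeper directly; only the old consumer did. A minimal consumer-config sketch (host names and group id are assumptions):

```properties
# New-style consumer: contacts brokers directly, no ZooKeeper access needed.
bootstrap.servers=broker1:9092,broker2:9092
group.id=dept-a-consumers
```

This is why broker-only network access can be sufficient for client departments.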
+1 (non-binding)
Built the source and ran quickstart (including streams) successfully on
Ubuntu (with both Java 8 and Java 9).
I understand the Windows platform is not officially supported, but I ran
the same on Windows 10, and except for Step 7 (Connect) everything else
worked fine.
There ar
Hello All,
Is https://issues.apache.org/jira/browse/KAFKA-6185 still an issue? Are
there still memory leak issues or can we move forward with installing
version 1.0.0 without running into this bug? What is the status on this?
Thanks
On 2018/01/05 19:58:36, Brett Rann wrote:
> Is there a plan/et
I don’t think this is a terrible idea. It’s really the only way to know which
events came before and which came after a given event across all partitions.
> On Feb 26, 2018, at 11:18 AM, Ryan Worsley wrote:
>
> Hey everyone,
>
> I believe I have a use-case for writing a control message periodi
Hey everyone,
I believe I have a use-case for writing a control message periodically and
transactionally to ALL partitions of a topic (so that it's totally ordered
with respect to events).
Does anyone know if this is a terrible idea? Are there any best practices
with doing it? Is there a better
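To illustrate the idea in the message above: a marker written transactionally to every partition gives a consistent "cut", so any event is unambiguously before or after it across the whole topic. A toy in-memory sketch of that property (this is not the Kafka API; the data structures and names are illustrative assumptions):

```python
# Toy model: each partition is just a Python list acting as a log.
partitions = {0: ["e1", "e2"], 1: ["e3"], 2: []}

def write_control_marker(partitions, marker_id):
    # In Kafka this would be one transaction spanning all partitions;
    # here we simply append the marker to every in-memory log.
    for log in partitions.values():
        log.append(("CTRL", marker_id))

write_control_marker(partitions, 42)
# Every event appended before the marker sits before the cut in its
# partition; everything appended later sits after it.
print(all(log[-1] == ("CTRL", 42) for log in partitions.values()))  # True
```

With the real producer this would map to one transaction that sends the marker to each partition explicitly, so consumers reading in read-committed mode see the markers atomically.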
Hello all,
I have read the linked properties documentation, but I don't really understand the
difference between:
log.flush.offset.checkpoint.interval.ms
and
log.flush.start.offset.checkpoint.interval.ms
Do you have a use case for each property? I can't figure out what
the differ
Hi all,
This might be a very naive question and may even be better directed at
ZooKeeper, but it's specific to Kafka and thus I am asking here.
How do Kafka brokers know about the new ZooKeeper leader server selected
after the original one is down? How is this message propagated to the Kafka
brokers, s
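For what it's worth, brokers don't need to be told about a ZooKeeper leader change explicitly: they are configured with the full ensemble, and the ZooKeeper client library fails over between the listed servers on its own. A sketch of the relevant broker setting (host names are assumptions):

```properties
# List every ZooKeeper ensemble member; the ZK client library tries
# other servers from this list automatically if its current one dies.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```

Leader election happens inside the ensemble; clients only need a live server to connect to.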