+1 (non-binding)
Thanks.
--Vahid
From: Gwen Shapira
To: "d...@kafka.apache.org" , Users
Date: 06/05/2017 09:38 PM
Subject: [VOTE] KIP-162: Enable topic deletion by default
Hi,
The discussion has been quite positive, so I posted a JIRA, a PR and
updated the KIP with the latest decisions.
Let's officially vote on the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default
JIRA is here: https://issues.apache.org/jira/browse/KA
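For reference, the broker setting this KIP proposes to flip is a one-line server config; before KIP-162 it defaulted to false (a sketch of the relevant fragment, not the full server.properties):

```properties
# server.properties (broker config); KIP-162 proposes making true the default
delete.topic.enable=true
```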
Hi everyone, me again :)
I'm still trying to implement my "remoting" layer that allows
my clients to see the partitioned Kafka Streams state
regardless of which instance they hit. Roughly, my lookup is:
Message get(Key key) {
    RemoteInstance instance = selectPartition(key);
    return instance.get(key);  // (inferred; the archived preview is truncated here)
}
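A self-contained sketch of the lookup described above. RemoteInstance, Message, and selectPartition are stand-ins for the poster's own types, not a Kafka API; in a real Streams app the owning host would be discovered via the instance-metadata API and the call would be an actual remote request.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch, assuming string keys and a fixed list of instances.
class RemoteLookup {
    static class Message {
        final String payload;
        Message(String payload) { this.payload = payload; }
    }

    interface RemoteInstance {
        Message get(String key);  // stands in for an HTTP/RPC call to that instance
    }

    private final List<RemoteInstance> instances;

    RemoteLookup(List<RemoteInstance> instances) { this.instances = instances; }

    // Hash the key to the instance owning its partition, the way a fixed
    // partitioner would: non-negative hash modulo the number of instances.
    RemoteInstance selectPartition(String key) {
        return instances.get((key.hashCode() & 0x7fffffff) % instances.size());
    }

    Message get(String key) {
        RemoteInstance instance = selectPartition(key);
        return instance.get(key);
    }
}
```

The point of routing through selectPartition is that every instance applies the same hash, so any instance a client hits can forward the lookup to the one that actually holds the key's state.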
Hi,
We've run into a NullPointerException that looks like it might be related
to https://issues.apache.org/jira/browse/KAFKA-5141
Here's the Stacktrace:
java.lang.NullPointerException
[2017-06-02 15:55:51,454] ERROR Failed to start task
sts-sink-connector-0 (org.apache.kafka.connect.runtime.Wor
Should go to dev list, too.
Forwarded Message
Subject: Re: [DISCUSS]: KIP-161: streams record processing exception
handlers
Date: Mon, 5 Jun 2017 19:19:42 +0200
From: Jan Filipiak
Reply-To: users@kafka.apache.org
To: users@kafka.apache.org
Hi
just my few thoughts
On 05.06.20
@Jan: EOS will be turned off by default in 0.11. I assume we might enable
it by default in a later release, but there will always be a config to
disable it.
-Matthias
On 6/5/17 10:19 AM, Jan Filipiak wrote:
> Hi
>
> just my few thoughts
>
> On 05.06.2017 11:44, Eno Thereska wrote:
>> Hi there,
>
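For context, the switch Matthias refers to surfaced in 0.11 as a Streams config; a minimal fragment, assuming the 0.11-era property name:

```properties
# Kafka Streams config (0.11+): default is at_least_once; EOS is opt-in
processing.guarantee=exactly_once
```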
Frank,
If you use "now", I assume you are calling System.currentTimeMillis().
If yes, you can also use the predefined WallclockTimestampExtractor that
ships with Streams (no need to write your own).
> I thought that the Timestamp extractor would then also use
>> that updated timestamp as 'stream
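A minimal config fragment for the extractor Matthias mentions, assuming the 0.10.x property name (later releases renamed it default.timestamp.extractor):

```properties
# Streams config, 0.10.x-era property name
timestamp.extractor=org.apache.kafka.streams.processor.WallclockTimestampExtractor
```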
Hi
just my few thoughts
On 05.06.2017 11:44, Eno Thereska wrote:
Hi there,
Sorry for the late reply, I was out this past week. Looks like good progress
was made with the discussions either way. Let me recap a couple of points I saw
into one big reply:
1. Jan mentioned CRC errors. I think th
Hi,
I am using Kafka v0.10.0.1 and I want to enable compression for specific
topics as indicated in the documentation
https://cwiki.apache.org/confluence/display/KAFKA/Compression ; as far as
I tested, it seems that compression per topic is only valid for v0.7.0 and
not supported in newer versions.
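For what it's worth, brokers of that era do expose compression as a topic-level config (compression.type); a hedged example against a 0.10-era cluster, where the host and topic name are placeholders:

```shell
# Set compression on an existing topic (ZooKeeper-based tooling of that era)
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config compression.type=gzip
```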
Thank you all for your input. Going by the opinions, I guess it has a lot
to do with the use case and how the clusters will evolve over time. Will
keep these in mind. Thanks!
On Sun, Jun 4, 2017 at 10:13 AM, Michal Borowiecki <
michal.borowie...@openbet.com> wrote:
> We are indeed running this setu
Hi Kafka Users,
I am trying to set up a simple authentication mechanism for my Kafka
instance running on my VirtualBox VM.
I am facing a lot of difficulty in starting ZooKeeper 3.4.10.
The scenario is like this...
I have a single admin User called sharjosh who's starting both the
Zookeeper an
Thanks Guozhang,
I figured I could use a custom timestamp extractor, and set that timestamp
to 'now' when reading a source topic, as the original timestamp is pretty
much irrelevant. I thought that the Timestamp extractor would then also use
that updated timestamp as 'stream time', but I don't rea
Hi there,
Sorry for the late reply, I was out this past week. Looks like good progress
was made with the discussions either way. Let me recap a couple of points I saw
into one big reply:
1. Jan mentioned CRC errors. I think this is a good point. As these happen in
Kafka, before Kafka Streams g
Is there any work-around for this? How can we leverage the auto-cleanup
without taking the server down?
KR,
On 1 June 2017 at 15:46, Mohammed Manna wrote:
> Sorry for bugging everyone, but does anyone have any workaround which has
> been implemented successfully? I am assuming that it's a simpl
Hi,
I set up a fresh cluster (3 brokers, 3 ZooKeeper nodes) and created a topic
according to your settings - obviously the log directories are kept
separate, e.g. /var/lib/zookeeper2 and /var/lib/zookeeper3, and not to
mention the myid files for every ZooKeeper to identify itself in the
ensemble. Canno