Hi Lis,
Yes, they are free software. The full terms of the licenses are
available here: https://github.com/apache/kafka/blob/trunk/LICENSE and
here: https://github.com/apache/zookeeper/blob/master/LICENSE.txt
Mathieu
On Tue, May 23, 2017 at 5:54 AM, LISBETH SANTAMARIA GUTIERREZ
Hi Fathima,
Setting "retries=0" on the producer means that if an attempt to produce a
message encounters an error, that message will be lost. The producer is
likely to encounter intermittent errors when you kill one broker in the
cluster.
I'd suggest trying this test with a
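A minimal sketch of the producer settings involved (the broker address and specific values are placeholders for illustration, not from the original thread):

```java
import java.util.Properties;

// With retries=0, any transient error (e.g. a leader election while a
// broker is being killed) immediately fails the send and the message is
// lost unless the caller retries. A non-zero retries value lets the
// producer ride out the failover itself.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder
props.put("acks", "all");      // wait for in-sync replica acknowledgement
props.put("retries", "3");     // retry transient errors instead of failing fast
// A KafkaProducer constructed with these props resends on retriable
// errors rather than surfacing them on the first attempt.
```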
+1 (non-binding)
Upgraded KS & KC applications to 0.10.2.1 RC1, successfully ran
application-level acceptance tests.
Mathieu
On Wed, Apr 12, 2017 at 6:25 PM, Gwen Shapira wrote:
> Hello Kafka users, developers, client-developers, friends, romans,
> citizens, etc,
>
> This
Hi Gwen,
+1, looks good to me. Tested broker upgrades, and connect & streams
applications.
Mathieu
On Fri, Apr 7, 2017 at 6:12 PM, Gwen Shapira wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for the release of Apache Kafka
Hey Kafka Users,
When using a Kafka broker w/ auto.create.topics.enable set to true, how do
Kafka users generally manage configuration of those topics? In
particular, cleanup.policy={compact/delete} can be a crucial configuration
value to get correct.
In my application, I have a couple Kafka
.4 <https://github.com/facebook/rocksdb/releases/tag/v5.1.4>
>
> only while the latest release is 5.1.2.
>
>
> Guozhang
>
>
> On Fri, Mar 17, 2017 at 10:27 AM, Mathieu Fenniak <
> mathieu.fenn...@replicon.com> wrote:
>
> > Hey all,
> >
>
Hey all,
So... what does it mean to have a RocksDBException with a message that just
has a single character? "e", "q", "]"... I've seen a few. Has anyone seen
this before?
Two example exceptions:
https://gist.github.com/mfenniak/c56beb6d5058e2b21df0309aea224f12
Kafka Streams 0.10.2.0. Both
Hey Petr,
I have the same issue. But I just cope with it; I wire up default
dependencies directly in the connector and task constructors, expose them
through properties, and modify them to refer to mocks in my unit tests.
It's not a great approach, but it is simple.
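The pattern described above can be sketched in plain Java. All names here are made up for illustration; a real connector would extend Kafka Connect's SourceConnector/SourceTask classes:

```java
/**
 * Illustrative sketch: the task wires a real dependency by default in
 * its constructor, exposes it through a setter, and unit tests swap in
 * a mock. Class and interface names are hypothetical.
 */
public class ReplicationSourceTask {
    /** Stand-in for whatever external system the task talks to. */
    public interface ReplicationStream {
        String nextEvent();
    }

    private ReplicationStream stream;

    public ReplicationSourceTask() {
        // Default wiring: the real dependency, created directly.
        this.stream = () -> "real-event";
    }

    // Test seam: not part of any Connect API, just a plain property.
    public void setStream(ReplicationStream stream) {
        this.stream = stream;
    }

    public String poll() {
        return stream.nextEvent();
    }
}
```

A unit test then calls setStream(() -> "mock-event") and asserts on poll() without touching any real external system.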
Why KConnect does take
ng infrastructure issue are you referring to?
>
> BTW there is some related issue for the commit failed exception that has
> been fixed in trunk post 0.10.2:
> https://github.com/apache/kafka/commit/4db048d61206bc6efbd14
> 3d6293216b7cb4b86c5
>
>
> Guozhang
>
>
>
int that they're not assigned to "me" is accurate.
Mathieu
On Fri, Mar 10, 2017 at 11:15 AM, Mathieu Fenniak <
mathieu.fenn...@replicon.com> wrote:
> Hey Kafka Users,
>
> I've been observing a few instances of CommitFailedException (group has
> already rebalanced)
Hey Kafka Users,
I've been observing a few instances of CommitFailedException (group has
already rebalanced) that seem to happen well within max.poll.interval.ms
since the last commit. In at least one specific case that I've looked at,
between the last successful commit and the failed commit,
Hi Vishnu,
I'd suggest you take a look at the broker configuration value
"offsets.retention.minutes".
The consumer offsets are stored in the __consumer_offsets topic.
__consumer_offsets is a compacted topic (cleanup.policy=compact), where
the key is the combination of the consumer group, the
Hi Jun,
I ran into the same question today (see thread, subject: Consumer / Streams
causes deletes in __consumer_offsets?), and here's what Eno and Guozhang
helped me understand:
There are broker-level configuration values called
"offsets.retention.minutes" and
>
> Guozhang
>
>
> On Wed, Feb 22, 2017 at 8:41 AM, Mathieu Fenniak <
> mathieu.fenn...@replicon.com> wrote:
>
> > Hi Eno,
> >
> > Thanks for the quick reply. I think that probably does match the data
> I'm
> > seeing. This surprises me a bit b
nsumer-group>
>
> Thanks
> Eno
>
> > On 22 Feb 2017, at 16:08, Mathieu Fenniak <mathieu.fenn...@replicon.com>
> wrote:
> >
> > Hey users,
> >
> > What causes delete tombstones (value=null) to be sent to the
> > __consumer_offsets topic?
> >
Hey users,
What causes delete tombstones (value=null) to be sent to the
__consumer_offsets topic?
I'm observing that a Kafka Streams application that is restarted after a
crash appears to be reprocessing messages from the beginning of a topic.
I've dumped the __consumer_offsets topic and found
On Wed, Feb 15, 2017 at 5:04 PM, Matthias J. Sax
wrote:
> - We also removed method #topologyBuilder() from KStreamBuilder because
> we think #transform() should provide all functionality you need to
> mix-an-match Processor API and DSL. If there is any further concern
>
+1 (non-binding)
Still looks as good as RC0 did for my streams workload. :-)
Mathieu
On Wed, Feb 15, 2017 at 1:23 PM, Magnus Edenhill wrote:
> Verified with librdkafka v0.9.4-RC1.
>
> 2017-02-15 9:18 GMT-08:00 Tom Crayford :
>
> > Heroku tested this
On Tue, Feb 14, 2017 at 9:37 AM, Damian Guy wrote:
> > And about printing the topology for debuggability: I agree this is a
> > potential drawback, and I'd suggest maintaining some functionality to
> > build a "dry topology" as Mathieu suggested; the difficulty is
On Tue, Feb 14, 2017 at 1:14 AM, Guozhang Wang wrote:
> Some thoughts on the mixture usage of DSL / PAPI:
>
> There were some suggestions on mixing the usage of DSL and PAPI:
> https://issues.apache.org/jira/browse/KAFKA-3455, and after thinking it a
> bit more carefully, I'd
Hi Adam,
If you increase the number of partitions in the topic "topic1" after the
state store is created, you'd need to manually increase the number of
partitions in the "app1-store1-changelog" topic as well. Or remove the
topic and let KS recreate it next run. But, either way, hopefully you
, yada yada).
Mathieu
On Fri, Feb 10, 2017 at 2:22 PM, Steven Schlansker <
sschlans...@opentable.com> wrote:
>
> > On Feb 10, 2017, at 1:09 PM, Mathieu Fenniak <
> mathieu.fenn...@replicon.com> wrote:
> >
> > Hey Steven,
> >
> > If you have
Hey Steven,
If you have one KStream, and you want to produce to a topic that is read by
another KStream, you'd use the ".through" method of the first KStream.
".through" both outputs to a topic and returns a KStream that reads from
that topic. (".to" just outputs to a topic)
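A hedged sketch of the distinction against the 0.10.x-era Streams DSL (topic names are made up; default serdes are assumed to be configured):

```java
// Assumes a 0.10.x-era KStreamBuilder; topic names are illustrative.
KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> first = builder.stream("source-topic");

// ".to" just writes to the topic and returns nothing:
//   first.to("intermediate-topic");

// ".through" writes to the topic AND returns a KStream reading from it,
// so the second stage continues from that topic within one topology:
KStream<String, String> second = first.through("intermediate-topic");
second.foreach((key, value) -> System.out.println(key + "=" + value));
```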
If you want to
Hi Sachin,
Streams apps can be configured with a rocksdb.config.setter, which is a
class name that needs to implement
the org.apache.kafka.streams.state.RocksDBConfigSetter interface, which can
be used to reduce the memory utilization of RocksDB. Here's an example
class that trims it way down
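The example class itself was cut off in this archive; the following is a hedged sketch of what such a setter can look like against the 0.10.2-era interface. The specific byte values are illustrative only, not a tuning recommendation:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Illustrative values only; tune for your own workload.
public class LowMemoryRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(2 * 1024 * 1024L);  // shrink block cache
        options.setTableFormatConfig(tableConfig);
        options.setWriteBufferSize(2 * 1024 * 1024L);     // smaller memtables
        options.setMaxWriteBufferNumber(2);               // fewer memtables
    }
}
// Enabled via the streams config, e.g.:
//   props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
//             LowMemoryRocksDBConfig.class);
```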
thias J. Sax <matth...@confluent.io>
> > wrote:
> >
> >> Yes, that is correct.
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 2/7/17 6:39 PM, Mathieu Fenniak wrote:
> >>> Hey kafka users,
> >>&
Hey kafka users,
Is it correct that a Kafka topic that is used for a KTable should be set to
cleanup.policy=compact?
I've never noticed until today that the KStreamBuilder#table()
documentation says: "However, no internal changelog topic is created since
the original input topic can be used for
On Mon, Feb 6, 2017 at 2:35 PM, Matthias J. Sax
wrote:
> - adding KStreamBuilder#topologyBuilder() seems like be a good idea to
> address any concern with limited access to TopologyBuilder and DSL/PAPI
> mix-and-match approach. However, we should try to cover as much as
>
Hi Matthias,
I use a few of the methods that you're pointing out that will be deprecated
and don't have an apparent alternative, so I wanted to just let you know
what they are and what my use-cases are for them.
First of all, I use a combination of DSL and PAPI in the same application
very
ring rebalances and shutdown.
> There is a good chance that your closing of the stores is causing the
> issue. Of course if you see the exception again then please report back so
> we can investigate further.
>
> Thanks,
> Damian
>
> On Thu, 2 Feb 2017 at 16:12 Mathieu Fenniak
ns. There are some ideas about how to avoid this, but
> nothing concrete on the roadmap yet.
>
> -Ewen
>
> On Fri, Jan 13, 2017 at 10:32 AM, Mathieu Fenniak <
> mathieu.fenn...@replicon.com> wrote:
>
> > Hey kafka-users,
> >
> > Is it normal for a Kafka Conn
Hey kafka-users,
Is it normal for a Kafka Connect source connector that
calls requestTaskReconfiguration to cause all the connectors on the
kafka-connect distributed system to be stopped and started?
One of my three connectors (2x source, 1x sink) runs a background thread
that will occasionally
Hi Kafka Users,
I'm looking for a bit of clarification on the documentation for
implementing a SourceTask. I'm reading a replication stream from a
database in my SourceTask, and I'd like to use commit or commitRecord to
advance the other system's replication stream pointer so that it knows I
Hi Sachin,
Some quick answers, and a link to some documentation to read more:
- If you restart the application, it will start from the point it crashed
(possibly reprocessing a small window of records).
- You can run more than one instance of the application. They'll
coordinate by virtue of
8:36 AM, Damian Guy <damian@gmail.com> wrote:
> Hi Mathieu,
>
> I'm trying to make sense of the rather long stack trace in the gist you
> provided. Can you possibly share your streams topology with us?
>
> Thanks,
> Damian
>
> On Mon, 5 Dec 2016 at 14:14 Ma
streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
> 0);
> )
>
> Thanks
> Eno
> > On 4 Dec 2016, at 03:47, Mathieu Fenniak <mathieu.fenn...@replicon.com>
> wrote:
> >
> > Hey all,
> >
> > I've just been running a quick test of my kafka-s
Hi Jon,
Here are some lag monitoring options that are external to the consumer
application itself; I don't know if these will be appropriate for you. You
can use a command-line tool like kafka-consumer-groups.sh to monitor
consumer group lag externally (
Hey all,
I've just been running a quick test of my kafka-streams application on the
latest Kafka trunk (@e43bbce), and came across this error. I was wondering
if anyone has seen this error before, have any thoughts on what might cause
it, or can suggest a direction to investigate it further.
I'd like to quickly reinforce Frank's opinion regarding the rocksdb memory
usage. I was also surprised by the amount of non-JVM-heap memory being
used and had to tune the 100 MB default down considerably. It's also
unfortunate that it's hard to estimate the memory requirements for a KS app
Hey Apache Users,
I'm working on a web application that has a web service component, and a
background processor component. Both applications will send messages to
the same Kafka topic as an object is manipulated.
In some cases, a web service call in the service component will send a
message to
Hey Ali,
If you have auto create turned on, which it sounds like you do, and you're
happy with using the broker's configured partition count and replication
factor, then you can call "partitionsFor(String topic)" on your producer.
This will create the topic without sending a message to it. I'm
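A minimal sketch of that call (bootstrap address and topic name are placeholders):

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder

KafkaProducer<String, String> producer =
    new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());

// partitionsFor() fetches topic metadata; with
// auto.create.topics.enable=true on the broker, the metadata request
// itself triggers topic creation. No message is produced.
List<PartitionInfo> partitions = producer.partitionsFor("my-topic");
producer.close();
```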
ge queries on windows, etc. Details can be found in the KIP
> (we are working on more docs / blog posts at the time):
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-67%3A+Queryable+state+for+Kafka+Streams
>
> Guozhang
>
>
> On Thu, Aug 18, 2016 at 6:40 AM, Mathi
Oh, that's interesting. It looks like the Gradle Wrapper's jar file
was intentionally removed from the kafka source tree
(https://issues.apache.org/jira/browse/KAFKA-2098), which would cause
this error. The README file in the repo says to run "gradle" first,
which will install the wrapper
Hello again, kafka-users,
When I aggregate a KTable, a future input that updates a KTable's
value for a specific key causes the aggregate's subtractor to be
invoked, and then its adder. This part is great, completely
as-expected.
But what I didn't expect is that the intermediate result of the
rios with
> things
> > like Flink, Spark and custom apps.
> > What would stop you from taking the same approach?
> >
> > –
> > Best regards,
> > Radek Gruchalski
> > ra...@gruchalski.com
> >
> >
> > On August 15, 2016 at 9:41:37 PM, M
is successfully to test publish / consume scenarios with things
> like Flink, Spark and custom apps.
> What would stop you from taking the same approach?
>
> –
> Best regards,
> Radek Gruchalski
> ra...@gruchalski.com
>
>
> On August 15, 2016 at 9:41:37 PM, Mathieu Fe
Hey Martin,
I had to modify the -G argument to that command to include the Visual
Studio year. If you run "cmake /?", it will output all the available
generators. My cmake looked like:
cmake -G "Visual Studio 12 2013 Win64" -DJNI=1 ..
I think this is probably a change in cmake since the
Hi Martin,
rocksdb does not currently distribute a Windows-compatible build of their
rocksdbjni library. I recently wrote up some instructions on how to
produce a local build, which you can find here:
" test Kafka cluster).
>
> -Michael
>
>
>
>
>
> On Mon, Aug 15, 2016 at 3:44 PM, Mathieu Fenniak <
> mathieu.fenn...@replicon.com> wrote:
>
> > Hey all,
> >
> > At my workplace, we have a real focus on software automated testing. I'd
>
Hey all,
At my workplace, we have a real focus on software automated testing. I'd
love to be able to test the composition of a TopologyBuilder with
org.apache.kafka.test.ProcessorTopologyTestDriver
...@gmail.com> wrote:
> Hi Mathieu,
>
> I have a PR against 0.10.0 branch to backport the bug fix plus some
> refactoring, feel free to try it out:
>
>
> https://github.com/apache/kafka/pull/1735
>
>
> Guozhang
>
> On Wed, Aug 10, 2016 at 2:28 PM, Mat
to
> 0.10.0 and have another bug fix release.
>
>
> Guozhang
>
> On Wed, Aug 10, 2016 at 7:32 AM, Mathieu Fenniak <
> mathieu.fenn...@replicon.com> wrote:
>
> > Hey there, Kafka Users,
> >
> > I'm trying to join two topics with Kafka Streams. The first
Hey there, Kafka Users,
I'm trying to join two topics with Kafka Streams. The first topic is a
changelog of one object, and the second is a changelog of a related
object. In order to join these tables, I'm grouping the second table by a
piece of data in it that indicates what record it is
n in-memory store (included with
> Kafka Streams) for development purposes. In that scenario RocksDb would not
> be needed.
>
> Eno
>
>
> > On 4 Aug 2016, at 16:14, Mathieu Fenniak <mathieu.fenn...@replicon.com>
> wrote:
> >
> > Hey all,
> >
> >
Hey all,
Is anyone developing Kafka Streams applications on Windows?
It seems like the RocksDB Java library doesn't include a native JNI library
for Windows, which prevents a Kafka Streams app from running on Windows. I
was just wondering if others have run into this, and if so, what
Hello again, Kafka users,
My end goal is to get stream-processed data into a PostgreSQL database.
I really like the approach that Kafka Streams takes; it's "just" a
library, I can build a normal Java application around it and deal with
configuration and orchestration myself. To persist my
gt; >
> > On Wed, Jul 20, 2016 at 10:58 AM, Mathieu Fenniak <
> > mathieu.fenn...@replicon.com> wrote:
> >
> >> Hi Guozhang,
> >>
> >> Yes, I tried to apply the filter on the KTable that came from join, and
> >> then the foreach on the KTable
; and will keep you updated.
>
>
> As for KTable.filter(), I think it can actually achieve what you want: not
> forwarding nulls to the downstream operators; have you tried it out but
> find it is not working?
>
>
> Guozhang
>
>
>
> On Wed, Jul 20, 2016 at 7:42 AM, Mathieu
containing the current state
> of the join.
>
> This happens both ways, thus, if your first records of each stream do
> not match on the key, both result in a message to delete
> possible existing join-tuples in the result KTable.
>
> Does this make sense to you?
>
> -Matthias
>
and probably only log4j12 as test dependency. Similarly to
> Kafka Clients and Connect:
>
>
> <dependency>
>   <groupId>org.slf4j</groupId>
>   <artifactId>slf4j-log4j12</artifactId>
>   <version>1.7.21</version>
>   <scope>test</scope>
> </dependency>
>
> <dependency>
>   <groupId>org.slf4j</groupId>
>   <artifactId>slf4j-api</artifactId>
>   <version>1.7.21</version>
>   <scope>compile</scope>
> </dependency>
>
>
>
> Guozhang
>
>
> On Tue, Jul 19, 2016 at 10:39
) by excluding this
dependency in my build.gradle, as below, but I wanted to ask if this
dependency is a mistake, or intentional?
compile(group: 'org.apache.kafka', name: 'kafka-streams', version: '0.10.0.0') {
    exclude module: 'slf4j-log4j12'
}
Thanks,
Mathieu Fenniak