Dear kafka users,
I run a kafka cluster (version 2.1.1) with 6 brokers to process ~100 messages
per second with a number of kafka streams apps. There are currently 53 topics
with 30 partitions each. I have exactly once processing enabled. My problem is
that the __consumer_offsets topic is
* compaction only operates on rolled-out segments
* deletion of tombstones only occurs once the delete.retention.ms delay has expired
Best regards
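For reference, a minimal sketch with the Java AdminClient for checking the compaction-related settings on __consumer_offsets (the bootstrap address is a placeholder):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class OffsetsTopicConfigCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(
                    ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config cfg = admin.describeConfigs(Collections.singleton(topic))
                    .all().get().get(topic);
            // cleanup.policy should be "compact"; segment.ms and
            // delete.retention.ms bound how fast tombstones can disappear.
            System.out.println(cfg.get("cleanup.policy"));
            System.out.println(cfg.get("segment.ms"));
            System.out.println(cfg.get("delete.retention.ms"));
        }
    }
}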
Hey there,
I've got the problem that the "__consumer_offsets" topic grows pretty big over
time. After some digging, I found offsets for consumer groups that were deleted
a long time ago still being present in the topic. Many of them are offsets for
console consumers that have been deleted
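One way to get rid of such leftovers is to delete the groups explicitly; a sketch with the Java AdminClient (the bootstrap address and the console-consumer prefix filter are assumptions):

import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class StaleGroupCleanup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // List all groups and pick out leftover console consumers.
            List<String> stale = admin.listConsumerGroups().all().get().stream()
                    .map(ConsumerGroupListing::groupId)
                    .filter(id -> id.startsWith("console-consumer-"))
                    .collect(Collectors.toList());
            // Deleting a group writes tombstones for its offsets; the records
            // only vanish after compaction of rolled segments (see above).
            admin.deleteConsumerGroups(stale).all().get();
        }
    }
}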
Hi kafka users,
since upgrading to kafka version 2.1.1, I get the following log message at every
startup of the streaming services:
"No checkpoint found for task 0_16 state store TestStore changelog
test-service-TestStore-changelog-16 with EOS turned on. Reinitializing the task
and restore its
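For context, EOS in Streams is enabled via a single setting; a minimal config sketch (application id taken from the log message above, the rest are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosStreamsConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-service"); // from the log above
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        // "exactly_once" is the setting behind "EOS turned on"; local state is
        // then guarded by the per-task checkpoint files the message refers to.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}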
Dear kafka experts,
I've got a kafka cluster with 3 brokers, version 2.1.1, running in Docker
containers on different hosts. The cluster is serving some kafka
streams apps. The topics are configured with replication.factor 3 and
min.insync.replicas 2. The cluster works fine most of
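For reference, a sketch of declaring a topic with exactly these guarantees via the Java AdminClient (topic name and partition count are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Replication factor 3 plus min.insync.replicas 2 tolerates one
            // broker outage while still acknowledging acks=all produces.
            NewTopic topic = new NewTopic("example-topic", 12, (short) 3)
                    .configs(Collections.singletonMap("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}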
n up small segments but many? Up until now I have used the default
configuration for log segment size. Should I reduce it?
Does anyone have any other ideas?
Best,
Claudia
-Original Message-
From: Claudia Wegmann
Sent: Wednesday, December 19, 2018 08:50
To: users@kafka.apache.org
B
We've found that rolling broker restarts
with 0.11 are rather easy and not to be feared.
Kind regards,
Liam Clarke
On Tue, Dec 18, 2018 at 10:43 PM Claudia Wegmann wrote:
> Hi Liam,
>
> thanks for the pointer. I found out that the log cleaner on all kafka
> brokers died with the f
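A dead log cleaner can also be spotted from outside via the broker's JMX gauge; a sketch (JMX host/port are placeholders, assuming the kafka.log:type=LogCleanerManager,name=time-since-last-run-ms bean, whose value keeps growing when the cleaner thread has died):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LogCleanerLiveness {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi"); // placeholder
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName gauge = new ObjectName(
                    "kafka.log:type=LogCleanerManager,name=time-since-last-run-ms");
            // A steadily growing value means no cleaning pass has finished.
            System.out.println("ms since last clean: "
                    + mbs.getAttribute(gauge, "Value"));
        }
    }
}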
Subject: Re: Configuration of log compaction
Hi Claudia,
Anything useful in the log cleaner log files?
Cheers,
Liam Clarke
On Tue, 18 Dec. 2018, 3:18 am Claudia Wegmann wrote:
> Hi,
>
> thanks for the quick response.
>
> My problem is not that no new segments are created, but that segments
>
segment.ms: This configuration controls the period of time after which Kafka
will force the log to roll even if the segment file isn't full to ensure that
retention can delete or compact old data. (Type: long; Default: 604800000;
Valid values: [1,...]; Server default property: log.roll.ms; Importance: medium)
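Going by that description, lowering segment.ms on an affected topic makes segments roll, and thus become eligible for compaction, much sooner; a sketch with the Java AdminClient (topic name and value are assumptions, and note that alterConfigs is non-incremental, so the full set of desired overrides has to be supplied together):

import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class LowerSegmentMs {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(
                    ConfigResource.Type.TOPIC, "example-changelog"); // placeholder
            // Roll after one hour instead of the 7-day default.
            Config cfg = new Config(Arrays.asList(
                    new ConfigEntry("cleanup.policy", "compact"),
                    new ConfigEntry("segment.ms", "3600000")));
            // Replaces ALL existing overrides on the topic with this set.
            admin.alterConfigs(Collections.singletonMap(topic, cfg)).all().get();
        }
    }
}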
Dear kafka users,
I've got a problem on one of my kafka clusters. I use this cluster with kafka
streams applications. Some of these stream apps use a kafka state store.
Therefore a changelog topic is created for those stores with cleanup policy
"compact". One of these topics is running wild for
1) Probably not, unless it was caused by an unclean shutdown, in which case
shut down cleanly. :)
2) Probably not, since the indexes are rebuilt.
FWIW I've done a bunch of 1.1.0 -> 2.0 upgrades and haven't had this issue.
For sure I have seen it in the past, though.
On Tue, Oct 9, 2018 at 2:09 AM Claudia Wegmann wrote:
Dear community,
I updated my kafka cluster from version 1.1.0 to 2.0.0 according to the upgrade
guide for rolling upgrades today. I encountered a problem after starting the
new broker version. The log is full of
"Found a corrupted index file corresponding to log file
etries.
-Matthias
Hey there,
I've got a few Kafka Streams services which run smoothly most of the time.
Sometimes, however, some of them get an exception "Abort sending since an error
caught with a previous record" (see below for a full example). The stream
service hitting this exception just stops its work
Hey,
I face the same problem and decided to go with your third solution. I use
Groovy as the scripting language, which has access to Java classes and
therefore also to Flink constructs like Time.seconds(10). See below for an
example of a pattern definition with Groovy:
private static Binding
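Since the Groovy example above is truncated in the archive, here is, as a rough illustration only, a comparable pattern definition in plain Java with the Flink CEP API (the Event type and conditions are invented):

import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.windowing.time.Time;

public class PatternSketch {
    // Invented event type for illustration.
    public static class Event {
        public boolean warning;
    }

    public static Pattern<Event, ?> twoWarningsInTenSeconds() {
        return Pattern.<Event>begin("first")
                .where(new SimpleCondition<Event>() {
                    @Override
                    public boolean filter(Event e) {
                        return e.warning;
                    }
                })
                .followedBy("second")
                .where(new SimpleCondition<Event>() {
                    @Override
                    public boolean filter(Event e) {
                        return e.warning;
                    }
                })
                .within(Time.seconds(10)); // the Time.seconds(10) from above
    }
}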
when using docker images, for example. I could imagine that this could be
helpful for your use case as well.
Cheers,
Till
On Mon, Aug 1, 2016 at 10:40 PM, Claudia Wegmann
<c.wegm...@kasasi.de<mailto:c.wegm...@kasasi.de>> wrote:
Hi Till,
thanks for the quick reply. Too bad,
- max() returns the max value at the specified position; the other
fields of the tuple/pojo are undefined
- maxBy() returns the tuple with the max value at the specified position (the
other fields are retained)
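A small illustration of the difference on a keyed tuple stream (values invented; keyBy(0)/max(1) use the positional API of that Flink era):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxVsMaxBy {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Tuple2<String, Integer>> readings = env.fromElements(
                Tuple2.of("sensor-1", 5),
                Tuple2.of("sensor-1", 9),
                Tuple2.of("sensor-1", 7));
        // max(1): running maximum in field 1; the other fields are not
        // guaranteed to come from the same element.
        readings.keyBy(0).max(1).print();
        // maxBy(1): emits the whole element that holds the maximum.
        readings.keyBy(0).maxBy(1).print();
        env.execute("max vs maxBy");
    }
}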
On Mon, Aug 8, 2016 at 2:55 PM, Claudia Wegmann
<c.wegm...@kasasi.de<mailto:c.wegm...@kasasi.de>> wrote:
OK, found my mistake regarding question 2). I keyed by the id value and gave
all the data sets different values there. So of course all 4 data sets are
printed. Sorry :)
But question 1.) still remains.
From: Claudia Wegmann [mailto:c.wegm...@kasasi.de]
Sent: Monday, August 8, 2016 14:27
Hey,
I have some questions about aggregate functions such as max or min. Take the
following example:
//create Stream with event time where Data contains an ID, a timestamp and a
temperature value
DataStream<Data> oneStream = env.fromElements(
new Data(123, new Date(116, 8,8,11,11,11), 5),
. However, you would only be able to scale within a
single Flink job and not across Flink jobs.
Cheers,
Till
Hey everyone,
I've got some questions regarding savepoints in Flink. I have the following
situation:
There is a microservice that reads data from Kafka topics, creates Flink
streams from this data and does different computations/pattern matching
workloads. If the overall workload for this
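For reference, a small sketch of the moving parts: checkpointing is enabled inside the job, while savepoints are taken from outside with the CLI (interval and job contents are placeholders):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Periodic checkpoints make the job recoverable; a savepoint is a
        // manual snapshot, triggered externally with
        //   bin/flink savepoint <jobId> [targetDirectory]
        // and resumed from with
        //   bin/flink run -s <savepointPath> <jarFile>
        env.enableCheckpointing(10_000); // every 10 seconds (placeholder)
        env.fromElements(1, 2, 3).print(); // stand-in for the real pipeline
        env.execute("checkpointed job");
    }
}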
to be recompiled? Am I overlooking something important?
Thanks for some input.
Best,
Claudia
From: Claudia Wegmann
Sent: Friday, July 15, 2016 12:58
To: 'user@flink.apache.org' <user@flink.apache.org>
Subject: RE: dynamic streams and patterns
Hey Robert,
thanks for the valuable input.
to deploy, and a simple testing
job is implemented in a few hours, I would suggest doing some experiments to
see how Flink behaves in the given environment.
Regards,
Robert
Hey everyone,
I'm quite new to Apache Flink. I'm trying to build a system with Flink and
wanted to hear your opinion and whether the proposed architecture is even
possible with Flink. The environment for the system will be a microservice
architecture handling messaging via async events.
I