ails, I'm just writing this quickly before calling it a
> day. :-)
>
> Cheers,
> Aljoscha
>
> On Tue, 12 Apr 2016 at 18:21 Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi everyone,
>
> my
this in Flink?
>
>
> Thanks!
>
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
Sitz: Unterföhring * Amtsgericht München * HRB 135082
Hi everyone,
my experience with the RocksDBStateBackend has left me a little bit
confused. Maybe you guys can confirm that my experience is the expected
behaviour ;):
I have run a "performance test" twice, once with FsStateBackend and once
with RocksDBStateBackend for comparison. In this particular test
Hi everyone,
thanks to Robert, I found the problem.
I was setting "recovery.zookeeper.path.root" on the command line with
-yD. Apparently this is currently not supported. You need to set the
parameter in flink-conf.yaml.
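For reference, a minimal flink-conf.yaml sketch; the key name is the one discussed in this thread (pre-1.x naming), and the path value is just an illustrative placeholder:

```yaml
# flink-conf.yaml
# Set the ZooKeeper recovery root namespace here; passing it with -yD on the
# command line is not picked up, as described above.
recovery.zookeeper.path.root: /flink_recovery
```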
Cheers,
Konstantin
On 05.04.2016 12:52, Konstantin Knauf wrote:
>
> On Tue, Apr 5, 2016 at 8:54 PM, Konstantin Knauf
> <konstantin.kn...@tngtech.com> wrote:
>> To my knowledge Flink takes care of deleting old checkpoints (I think it
>> says so in the documentation about savepoints). In my experience
>> though, if a job [...] checkpoint files are
>> usually not cleaned up. So some housekeeping might be necessary.
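Such housekeeping could be scripted; the sketch below is plain Python, not a Flink feature, and it assumes checkpoints land as one subdirectory per checkpoint under a common directory (the layout and the function name are assumptions):

```python
import os
import shutil

def prune_checkpoints(checkpoint_dir, keep=3):
    """Delete all but the `keep` most recently modified checkpoint
    subdirectories under `checkpoint_dir`. Returns the removed paths."""
    entries = [
        os.path.join(checkpoint_dir, d)
        for d in os.listdir(checkpoint_dir)
        if os.path.isdir(os.path.join(checkpoint_dir, d))
    ]
    # Newest first, by modification time.
    entries.sort(key=os.path.getmtime, reverse=True)
    removed = entries[keep:]
    for path in removed:
        shutil.rmtree(path)
    return removed
```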
> Thanks,
> Zach
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/internals/stream_checkpointing.html
> [2] http://arxiv.org/abs/1506.08603
>
er the client uses a
> >> different root path.
> >>
> >> The following seems to happen:
> >> - you use ledf_recovery as the root namespace
> >> - the task managers are connecting (they resolve the JM
> address via
s thousands of timers
> for, say, time 15:30:03 it actually only saves one timer.
>
> I created a Jira Issue: https://issues.apache.org/jira/browse/FLINK-3669
>
> Cheers,
> Aljoscha
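The timer coalescing described above can be illustrated outside Flink: if pending timers are kept in a set keyed by (key, timestamp), registering the same timestamp thousands of times stores a single entry. A plain-Python sketch of the idea (not Flink's actual implementation; names are made up):

```python
class TimerService:
    """Toy timer service that coalesces duplicate registrations:
    registering the same (key, timestamp) many times stores one timer."""
    def __init__(self):
        self._timers = set()

    def register_event_time_timer(self, key, timestamp):
        # Adding an existing (key, timestamp) pair to a set is a no-op,
        # so thousands of identical registrations keep only one timer.
        self._timers.add((key, timestamp))

    def fire(self, watermark):
        """Return and remove all timers due at or before the watermark."""
        due = {t for t in self._timers if t[1] <= watermark}
        self._timers -= due
        return sorted(due)
```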
>
> On Thu, 24 Mar 2016 at 11:30 Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>> wrote:
a collect?
>
>
> Best Regards,
> Subash Basnet
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
Sitz: Unterföhring * Amtsgericht München * HRB 135082
This must be
> an oversight on our part. I’ll make sure that the 1.0 release will have the
> correct behavior.
>> On 17 Feb 2016, at 16:35, Konstantin Knauf <konstantin.kn...@tngtech.com>
>> wrote:
>>
>> Hi everyone,
>>
>> if a DataStream is created
Hi everyone,
if a DataStream is created with .fromElements(...) all windows emit all
buffered records at the end of the stream. I have two questions about this:
1) Is this only the case for streams created with .fromElements() or
does this happen in any streaming application on shutdown?
2) Is there a configuration option to disable this behaviour, such that
buffered events remaining in windows are just discarded?
In our application it is critical that only events which were
explicitly fired are emitted from the windows.
Cheers and thank you,
Konstantin
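A plausible mechanism behind the flushing described above is the final watermark that closes a bounded stream: once a watermark past every window's end arrives, all pending event-time windows fire. The following is a plain-Python illustration of that mechanism, not Flink code, and the cause is an assumption rather than something confirmed in this thread:

```python
MAX_WATERMARK = float("inf")

class EventTimeWindows:
    """Buffer elements into tumbling event-time windows; a window fires
    when the watermark passes its end. A bounded stream ends with a
    final MAX_WATERMARK, which flushes every remaining window."""
    def __init__(self, size):
        self.size = size
        self.windows = {}

    def add(self, ts, element):
        end = (ts // self.size + 1) * self.size
        self.windows.setdefault(end, []).append(element)

    def on_watermark(self, watermark):
        fired = {end: elems for end, elems in self.windows.items()
                 if end <= watermark}
        for end in fired:
            del self.windows[end]
        return fired
```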
" only for
parallelism 1, with "TimestampExtractor2" it works regardless of the
parallelism. Run from the IDE.
Let me know if you need anything else.
Cheers,
Konstantin
[1] https://gist.github.com/knaufk/d57b5c3c7db576f3350d
On 25.11.2015 21:15, Konstantin Knauf wrote:
> Hi Aljoscha
> Could you maybe
> send me example code (and example data if it is necessary to reproduce the
> problem.)? This would really help me pinpoint the problem.
>
> Cheers,
> Aljoscha
>> On 17 Nov 2015, at 21:42, Konstantin Knauf <konstantin.kn...@tngtech.com>
>> wrote:
ted.
>
> This should do what you are looking for:
>
> DataStream
> .keyBy(_._1) // key by original key
> .timeWindow(..)
> .apply(...) // extract window end time: (origKey, time, agg)
> .keyBy(_._2) // key by time field
>
> Sorry, I know that's not very intuitive, but in Hadoop the settings for
> in different files (hdfs|yarn|core)-site.xml.
>
>
> On Sat, Nov 21, 2015 at 12:48 PM, Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi Ufuk,
>
> This looks like a problem with picking up the Hadoop config. Can you look
> into the logs to check whether the configuration is picked up? Change the log
> settings to DEBUG in log/log4j.properties for this. And can you provide the
> complete stack trace?
>
> – Ufuk
>
>
as part of the window it
>> triggered, but instead to create a new window for it and have the old window
>> to fire and purge on event time timeout.
>>
>> Take a look and see if it will be useful -
>> https://bitbucket.org/snippets/vstoyak/o9Rqp
>>
>> V
ing of the window after a
>> given number of events have been received.
>>
>> Is it currently possible to do this with the current combination of window
>> assigners and triggers? I am happy to write custom triggers etc, but
>> wanted to make sure it wasn't already available.
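Independent of Flink's trigger API, the firing policy being asked about can be sketched as: buffer elements per key and fire-and-purge once N have arrived. A minimal plain-Python sketch (the class and method names are made up):

```python
from collections import defaultdict

class CountWindow:
    """Emit (fire-and-purge) a key's buffered elements once `n` have arrived."""
    def __init__(self, n):
        self.n = n
        self.buffers = defaultdict(list)

    def add(self, key, element):
        buf = self.buffers[key]
        buf.append(element)
        if len(buf) >= self.n:
            self.buffers[key] = []   # purge after firing
            return buf               # fire: hand the window contents downstream
        return None                  # window not complete yet
```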
a look at it? to aljoscha at
> apache.org.
>
> Cheers,
> Aljoscha
>> On 16 Nov 2015, at 13:05, Konstantin Knauf <konstantin.kn...@tngtech.com>
>> wrote:
>>
>> Hi Aljoscha,
>>
>> ok, now I at least understand why it works with fromElements(...). For
> insert a dummy mapper after the Kafka source that just prints
> the element and forwards it? To see if the elements come with a good
> timestamp from Kafka.
>
> Cheers,
> Aljoscha
>> On 15 Nov 2015, at 22:55, Konstantin Knauf <konstantin.kn...@tngtech.com>
>>
Hi everyone,
I have the following issue with Flink (0.10) and Kafka.
I am using a very simple TimestampExtractor like [1], which just
extracts a millis timestamp from a POJO. In my streaming job, I read in
these POJOs from Kafka using the FlinkKafkaConsumer082 like this:
stream =