Hello Balaji
Yes. The state holder 'sum' in my example is actually created outside the
Mapper objects, so it stays where it is. I am creating 'var's inside the
Mapper objects to _refer_ to the same object, irrespective of the multiplicity
of the Mappers. The _open_
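The closure-sharing described above can be sketched in plain Scala (names like `SumHolder` are hypothetical; note that this sharing only holds while all the Mappers live in one JVM, since a distributed runtime would serialise each Mapper and give every parallel task its own copy):

```scala
// A state holder created OUTSIDE the mappers; each mapper only keeps a
// 'var' referring to the same object, so updates are visible to all.
class SumHolder { var sum: Long = 0L }

class StatefulMapper(holder: SumHolder) {
  private var state = holder            // a reference, not a copy
  def map(x: Long): Long = { state.sum += x; state.sum }
}

val shared = new SumHolder
val m1 = new StatefulMapper(shared)
val m2 = new StatefulMapper(shared)
m1.map(10L)
m2.map(5L)
// both mappers updated the same holder: shared.sum is now 15
```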
Hello Balaji ,
Thanks for your reply. This confirms my earlier assumption that one of the
usual ways to do it is to hold and nurture the application-state in an
external body; in your case: Redis.
So, I am trying to understand how one shares the handle to this
Hello all,
Let's say I want to hold some state value derived during one
transformation, and then use that same state value in a subsequent
transformation. For example:
myStream
.keyBy(fieldID) // Some field ID, may be 0
.map(new MyStatefulMapper())
.map(new MySubsequentMapper())
Now, I
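One way to let `MySubsequentMapper` see the state derived in `MyStatefulMapper`, as asked above, is to have the first mapper emit the derived value inside the record itself, so the next transformation receives it without any shared handle. A plain-Scala sketch (all class and field names here are hypothetical, not the Flink API):

```scala
// The first mapper carries a per-instance running sum and emits it
// alongside the original reading; the second mapper consumes it.
final case class Reading(id: Int, value: Double)
final case class Enriched(reading: Reading, runningSum: Double)

class MyStatefulMapper {
  private var sum = 0.0
  def map(r: Reading): Enriched = { sum += r.value; Enriched(r, sum) }
}

class MySubsequentMapper {
  def map(e: Enriched): String = s"${e.reading.id}:${e.runningSum}"
}

val mapper1 = new MyStatefulMapper
val mapper2 = new MySubsequentMapper
val out = List(Reading(0, 1.5), Reading(0, 2.5)).map(mapper1.map).map(mapper2.map)
// out == List("0:1.5", "0:4.0")
```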
"If you have built castles in the air, your work need not be lost; that is
where they should be. Now put the foundations under them."
--
Nirmalya Sengupta
https://about.me/sengupta.nirmalya
Hello Flinksters,
I have come across two terms in this presentation:
http://www.slideshare.net/sbaltagi/flink-vs-spark
(a) Hopping Windows
Could someone please exemplify, or point to a link which explains, what
this is?
(b) Native support for integrated datastore
Is this referring to the various
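Regarding (a): "hopping windows" is another name for sliding windows: a fixed window size combined with a fixed hop (slide) smaller than the size, so consecutive windows overlap and one element can land in several windows. A minimal sketch of the arithmetic, assuming integer timestamps and window starts aligned to the hop:

```scala
// For a timestamp ts, list the start times of every hopping window
// [start, start + size) that contains ts, given size and hop.
def hoppingWindowStarts(ts: Long, size: Long, hop: Long): Seq[Long] = {
  val lastStart = ts - (ts % hop)              // latest window containing ts
  (lastStart to 0L by -hop)                    // walk back one hop at a time
    .takeWhile(start => start + size > ts)     // stop once ts falls outside
    .sorted
}

val starts = hoppingWindowStarts(12L, size = 10L, hop = 5L)
// timestamp 12 falls in windows [5,15) and [10,20): starts == Seq(5, 10)
```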
Hello Aljoscha,
I am using Flink 0.10.1 (see below) and flinkspector (0.1-SNAPSHOT).
-
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-scala</artifactId>
  <version>0.10.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-scala</artifactId>
  <version>0.10.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients</artifactId>
  <version>0.10.1</version>
</dependency>
Hello Aljoscha,
I have also tried using the field's name in the sum("field3") function
(as you suggested), but this time the exception is different:
Exception in thread "main" java.lang.ExceptionInInitializerError
at
Hello Flinksters,
I am trying to use Flinkspector in a Scala code snippet of mine and Flink
is complaining. The code is here:
---
case class
Hello Alex ,
Many thanks for the explanation. '5 different windows' - that's the key. I
missed that completely. Thanks for plugging the hole; I think I understand
the behaviour better now.
I will follow your code-snippet (gist).
A lot more thanks for sharing the write-up
Hello lofifnc
I am keen to hear more about this particular thread of discussion. However,
just a silly question: in the first case, why do you say that 'Each 5
times, as expected'! What causes them to appear 5 times? I don't see any
_repeat()_ or _repeatAll()_ in the
Hello Aljoscha ,
My sincere apologies at the outset if I seem to repeat the same
question almost interminably. If it is frustrating you, I seek your
patience, but I really want to nail it down in my mind. :-)
The point about parallelizing is well taken. I understand why
Hello Aljoscha ,
You mentioned: '.. Yes, this is right if your temperatures don’t have any
other field on which you could partition them. '.
What I am failing to understand is that if temperatures are partitioned on
some other field (in my use-case, I have one such: the
Hello Aljoscha
Thanks very much for clarifying the role of Pre-Aggregation (rather,
Incr-Aggregation, now that I understand the intention). It helps me to
understand. Thanks to Stefano too, for keeping at the original question of
mine.
My current understanding is that if
Hello Stefano
Sorry for the late reply. Many thanks for taking the effort to write and share
an example code snippet.
I have been playing with the countWindow behaviour for some weeks now and I
am generally aware of the functionality of countWindowAll(). For my
Hello Stefano
I have tried to implement what I understood from your mail earlier in the
day, but it doesn't seem to produce the result I expect. Here's the code
snippet:
-
val env =
Hello Flinksters,
This is perhaps too trivial for most here in this forum, but I want to have
my understanding clear.
I want to find the average of temperatures coming in as a stream. The tuple
has the following fields:
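For a streaming average, a common incremental pattern is to carry (count, sum) as the accumulator and derive the average only at the end, rather than buffering every element. A plain-Scala sketch of the arithmetic (the temperature values are made up for illustration):

```scala
// Fold the stream into a (count, sum) pair, then divide once.
val temperatures = List(20.0, 22.0, 24.0)
val (count, sum) = temperatures.foldLeft((0L, 0.0)) {
  case ((c, s), t) => (c + 1, s + t)
}
val avg = sum / count
// avg == 22.0
```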
Hello Stefano
Many thanks for responding so quickly.
Your explanation not only confirms my understanding but gives a much
simpler solution. The facility of associating a specific parallelism with a
given operator didn't strike me at all. You are right that for my
Hello Till ,
From your prompt reply:
'... the CountTrigger *always* works together with the CountEvictor which
will make sure that only .. ' - that explains it. Thanks. I missed it.
A related question I have is this:
Between the PURGE facility of Trigger and REMOVAL
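The trigger-plus-evictor pairing mentioned above can be simulated in plain Scala. This mimics countWindow(size, slide) semantics as discussed in this thread, not the actual Flink internals: the trigger fires every `slide` elements, and the evictor then trims the buffer down to the last `size` elements before the window function is applied.

```scala
// Return the contents each window evaluation would see, firing after
// every `slide`-th element and keeping only the last `size` elements.
def slidingCountWindows[A](xs: List[A], size: Int, slide: Int): List[List[A]] =
  (slide to xs.length by slide).toList.map { fireAt =>
    xs.take(fireAt).takeRight(size)   // evict down to the last `size`
  }

val windows = slidingCountWindows(List(1, 2, 3, 4, 5, 6), size = 4, slide = 2)
// fires after every 2nd element, each time over (up to) the last 4:
// List(List(1, 2), List(1, 2, 3, 4), List(3, 4, 5, 6))
```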
Hello Aljoscha ,
I have checked again with the (fantastic) blog here:
https://flink.apache.org/news/2015/12/04/Introducing-windows.html
and I have come to understand that the contents of a window-buffer must be
disposed of *only* after the User-defined evaluation function has
Hello all,
Here's a code comment from
org.apache.flink.streaming.api.windowing.triggers.Trigger:
/**
 * Result type for trigger methods. This determines what happens
 * with the window.
 *
 * On {@code FIRE} the pane is evaluated and results are emitted. The
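For reference, the four results a Trigger can return can be modelled as a small ADT (a plain-Scala sketch, not the actual Flink enum): CONTINUE does nothing, FIRE evaluates the pane and emits results, PURGE discards the pane's contents without emitting, and FIRE_AND_PURGE does both.

```scala
// Each result says whether the window function runs (fires) and
// whether the window's buffered contents are discarded (purges).
sealed trait TriggerResult { def fires: Boolean; def purges: Boolean }
case object Continue     extends TriggerResult { val fires = false; val purges = false }
case object Fire         extends TriggerResult { val fires = true;  val purges = false }
case object Purge        extends TriggerResult { val fires = false; val purges = true  }
case object FireAndPurge extends TriggerResult { val fires = true;  val purges = true  }
```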
Hello Aljoscha ,
Thanks again for taking time to explain the behaviour of
CountWindowAll(m,n).
To be honest, the behaviour seems a bit sketchy to me - and probably it
needs a revisit - but if that's the way it is, then that's the way it is!
:-)
-- Nirmalya
--
Software
Hello Fabian (and others),
Sorry to bring up the same flogged topic of CountWindowAll() but I just
want to be sure that I understand it right.
For a dataset like the following (partial):
-
probe-f076c2b0,201,842.53,75.5372,1448028160,29.37
Hello Fabian
A small question: during the course of our recent conversation on the
behaviour of window, trigger and evictor, you had mentioned that if I - the
application programmer - do not attach a trigger to a window, Flink will
attach one by itself. This trigger ensures
Hello Fabian,
Many thanks for your encouraging words about the blogs. I want to make a
sincere attempt.
To summarise my understanding of the rule of removal of the elements from
the window (after going through your last mail), here are two corollaries:
1) If my workflow
Hello Aljoscha ,
Many thanks for taking time to explain the behaviour of Evictor. The
essence of my original post - about how the guide explains an Evictor - was
this. I think the guide should make this (counterintuitive) explanation of
the parameter to Evictor clearer. May
Hello Fabian,
Thanks for going through my long mail, and for your concise responses. I am just
happy that I was not way off the mark in my understanding.
It seems to me that I would rather wait for your blog before asking more
questions. Not sure if I will be left with enough drive to write my (planned)
Hi Aljoscha ,
Thanks for taking interest in my post and question.
If the Evictor removes elements _before_ the function is applied, then
what happens the first time the Evictor acts? That's what I am
failing to understand. At the beginning of the operation on the
Hello there.
I have just started exploring Apache Flink, and it has immediately got me
excited. Because I am a beginner, my questions may be a bit naive. Please
bear with me.
I refer to this particular sentence from Flink 0.10.0 Guide
Hello Fabian/Matthias,
Many thanks for showing interest in my query on SOF. That helps me sustain
my enthusiasm. :-)
After setting the parallelism of the environment to '1' and replacing _max()_
with _maxBy()_, I get a list of maximum temperatures, but I fail to explain to
myself how Flink arrives
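The max()/maxBy() distinction at play here, simulated with plain Scala collections (the probe names are made up for illustration): maxBy returns the whole record that holds the maximum field, whereas Flink's max() only guarantees the maximum for that one field, with the other fields taken from arbitrary records.

```scala
// (probe, temperature) readings.
val readings = List(("probe-a", 21.5), ("probe-b", 29.3), ("probe-c", 25.0))

// maxBy keeps the entire tuple whose second field is maximal.
val hottest = readings.maxBy(_._2)
// hottest == ("probe-b", 29.3): the full record, not just the value
```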
Hello Fabian,
A little long mail; please have some patience.
From your response: ' Let's start by telling me what you actually want to
do ;-) '
At a broad level, I want to write a (series of, perhaps) tutorial of Flink,
where these concepts are brought out by a mix of definition, elaboration,
Hello Fabian,
From your reply to this thread:
' it is correct that the evictor is called BEFORE the window function is
applied because this is required to support certain types of sliding
windows. '
This is clear to me now. However, my point was about the way it is
described in the User-guide.