Looks great. Any dates for the abstract deadline yet?
On Tue, Apr 7, 2015 at 2:38 PM, Kostas Tzoumas wrote:
> Ah, thanks Sebastian! :-)
>
> On Tue, Apr 7, 2015 at 2:33 PM, Sebastian wrote:
>
>> There are still some "Berlin Buzzwords" snippets in your texts ;)
>>
>> http://flink-forward.org/
Have you tried to replace
import org.apache.flink.streaming.api.environment._
import org.apache.flink.streaming.connectors.kafka
import org.apache.flink.streaming.connectors.kafka.api._
import org.apache.flink.streaming.util.serialization._
with
import org.apache.flink.streaming.api.scala._
imp
The compilation error occurs because you haven't declared a dependency on
flink-streaming-scala.
In sbt, you would declare something like:
libraryDependencies += "org.apache.flink" % "flink-streaming-scala" % "0.9.0"
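For reference, a minimal build.sbt for such a project might look as follows (the extra artifacts and the Scala version are assumptions on my side, not a verified build):

```scala
// Minimal sbt build for a Flink 0.9.0 streaming job in Scala (sketch).
name := "flink-streaming-example"

scalaVersion := "2.10.4" // assumed; Flink 0.9.x was built against Scala 2.10

libraryDependencies ++= Seq(
  "org.apache.flink" % "flink-scala"           % "0.9.0",
  "org.apache.flink" % "flink-streaming-scala" % "0.9.0",
  "org.apache.flink" % "flink-clients"         % "0.9.0"
)
```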
On Thu, Jul 16, 2015 at 6:36 AM, Wendong wrote:
> I tried, but got error:
>
> [error] Test
I use the same trick as Wendong to avoid the sbt compilation error
(excluding kafka_$(scala.binary.version)).
I still haven't managed to make sbt pass scala.binary.version to Maven.
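If it helps: sbt doesn't substitute Maven's ${scala.binary.version} property, but the same value is available in the build as scalaBinaryVersion.value, so you can splice it in yourself. A sketch (the connector artifact name and version are assumptions on my part):

```scala
// Splice the Scala binary version into the excluded artifact name ourselves;
// scalaBinaryVersion.value is sbt's equivalent of ${scala.binary.version}.
libraryDependencies += ("org.apache.flink" % "flink-connector-kafka" % "0.9.1")
  .exclude("org.apache.kafka", "kafka_" + scalaBinaryVersion.value)
```

Alternatively, the %% operator appends the Scala binary version to the artifact name automatically, but only when the dependency is actually cross-published with such a suffix.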
Anwar.
On Mon, Jul 20, 2015 at 9:42 AM, Till Rohrmann wrote:
> Hi Wendong,
>
> why do you exclude the kafka dependenc
Coz I don't like it :-)
No, seriously: sure, I can do it with Maven, and indeed it worked with Maven.
But the rest of our ecosystem uses sbt. That's why.
-Anwar
On Mon, Jul 20, 2015 at 10:28 AM, Till Rohrmann
wrote:
> Why not trying maven instead?
>
> On Mon, Jul 20, 2015
Can you share the build.sbt (or Build.scala), and maybe a small portion of
the code?
On Wed, Sep 9, 2015 at 12:27 PM, Giancarlo Pagano
wrote:
> Hi,
>
> I’m trying to write a simple test project using Flink streaming and sbt.
> The project is in Scala and it's basically the sbt version of the mave
Nice indeed :-)
On Mon, Oct 19, 2015 at 3:08 PM, Suneel Marthi
wrote:
> +1 to this.
>
> On Mon, Oct 19, 2015 at 3:00 PM, Fabian Hueske wrote:
>
>> Sounds good +1
>>
>> 2015-10-19 14:57 GMT+02:00 Márton Balassi :
>>
>> > Thanks for starting and big +1 for making it more prominent.
>> >
>> > On
Do you really need to iterate?
On Mon, Oct 19, 2015 at 5:42 PM, flinkuser wrote:
>
> Here is my code snippet but I find the union operator not workable.
>
> DataStream msgDataStream1 = env.addSource((new
> SocketSource(hostName1,port,'\n',-1))).filter(new
> MessageFilter()).setParallelism(1);
>
Yeah, I had similar problems with Kafka in Spark Streaming. I worked around
the problem by excluding Kafka from the connector dependency and then adding
the Kafka library back explicitly. Maybe you can try something like:
libraryDependencies ++= Seq("org.apache.flink" % "flink-scala" % "0.9.1",
"org.apache.flink" % "flink-c
Let me understand your case better here. You have a stream of models and a
stream of data. To process the data, you need a way to access your model
from the subsequent stream operations (map, filter, flatMap, ...).
I'm not sure in which cases Operator State is a good choice, but I think you
can als
>> for this one.
>>
>> I just want to gauge whether I need to use a memory cache or whether
>> operator state would be just fine.
>>
>> However, I'm concerned about Gen 2 garbage collection when caching our
>> own state without using operator state. Is there any clar
Broadcast is indeed what we do for the same type of problem as your initial
one. In another thread, Stephan mentioned the possibility of using
OperatorState with a ConnectedStream. I think that approach using
OperatorState does the job as well.
In my understanding, the approach using broadcast will requi
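To make the idea concrete, here is a plain-Scala sketch of the connected-stream approach, with no Flink dependency. In Flink this would correspond to a CoFlatMapFunction on stream1.connect(stream2), with the model held as operator state; all names below are illustrative only:

```scala
// Plain-Scala sketch (no Flink dependencies) of the connected-stream idea:
// one input carries model updates, the other carries data to score.
class ModelScorer {
  private var model: Option[Double => Boolean] = None

  // Called for each element of the (broadcast) model stream.
  def flatMap1(newModel: Double => Boolean): Unit =
    model = Some(newModel)

  // Called for each data element; emits nothing until a model has arrived.
  def flatMap2(value: Double): Option[Boolean] =
    model.map(_(value))
}
```

The point is simply that both inputs end up in one operator, so the data side can read the state that the model side last wrote.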
Hi all,
From the documentation:
"The Trigger specifies when the function that comes after the window clause
(e.g., sum, count) is evaluated (“fires”) for each window."
So, basically, if I specify:
keyedStream
.window(TumblingTimeWindows.of(Time.of(5, TimeUnit.SECONDS)))
.trigger(CountTri
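As a toy illustration of what the trigger changes (plain Scala, no Flink dependency; names are mine): with a count trigger, the window function fires every N elements instead of when the 5-second window ends.

```scala
// Toy simulation of a count trigger: the window "fires" every `threshold`
// elements rather than at the end of the time window.
final class CountTriggerSim(threshold: Int) {
  private var count = 0

  // Returns true when the window function (sum, count, ...) should fire.
  def onElement(): Boolean = {
    count += 1
    if (count >= threshold) { count = 0; true } else false
  }
}
```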
n of the window. Thus, it is not "or-ed" to the basic window
> > definition.
> >
> > If you want to have an or-ed window condition, you can customize it by
> > specifying your own window definition.
> >
> > > dataStream.window(new MyOwnWindow() extends Windo
ot starting a new 5 minute window).
>
> Cheers, Fabian
>
>
> 2015-11-27 14:59 GMT+01:00 Anwar Rizal :
>
>> Thanks Fabian and Aljoscha,
>>
>> I tried to implement the trigger as you described, as follows:
>>
>> https://gist.github.com/anonymous/d0578a4d27768a7
Allow me to jump into this very interesting discussion.
The second point is actually an interesting question.
I understand that we can set the timestamp of an event in Flink. What if we
set the timestamp to somewhere in the future, for example 24 hours from now?
Can Flink handle this case?
Also, I'm s
Hi All,
I saw Till's blog post in preparation. It will be very helpful. I hope that
other posts explaining how it works will come soon :-)
I have a question on the followedBy pattern matching semantics.
From the documentation
https://ci.apache.org/projects/flink/flink-docs-master/apis/stream
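While waiting for the blog post, here is how I understand the semantics, as a plain-Scala sketch with no Flink CEP dependency (function names are mine): followedBy allows other events between the two matching events (relaxed contiguity), whereas next requires them to be adjacent (strict contiguity).

```scala
// Plain-Scala sketch of CEP-style contiguity semantics; no Flink dependency.
object PatternSemantics {
  // followedBy: other events may occur between the two matching events.
  def followedBy[T](events: Seq[T])(first: T => Boolean, second: T => Boolean): Boolean = {
    val i = events.indexWhere(first)
    i >= 0 && events.drop(i + 1).exists(second)
  }

  // next: the second matching event must immediately follow the first.
  def next[T](events: Seq[T])(first: T => Boolean, second: T => Boolean): Boolean =
    events.sliding(2).exists {
      case Seq(a, b) => first(a) && second(b)
      case _         => false
    }
}
```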
answers your question Anwar.
>
> Cheers,
> Till
>
>
> On Mon, Apr 4, 2016 at 11:18 AM, Anwar Rizal wrote:
>
>> Hi All,
>>
>>
>> I saw Till's blog preparation. It will be a very helpful blog. I hope
>> that some other blogs that explain how